Test Report: Docker_Windows 14123

c44d2e2b1c943218fa489120d8944b8370d0b8b1:2022-06-04:24265

Failed tests (149/220)

Order  Failed test  Duration (s)
20 TestOffline 92.49
22 TestAddons/Setup 74.58
23 TestCertOptions 97.2
24 TestCertExpiration 384.24
25 TestDockerFlags 97.09
26 TestForceSystemdFlag 93.19
27 TestForceSystemdEnv 92.97
32 TestErrorSpam/setup 74.12
41 TestFunctional/serial/StartWithProxy 78.06
42 TestFunctional/serial/AuditLog 0
43 TestFunctional/serial/SoftStart 113.5
44 TestFunctional/serial/KubeContext 4.18
45 TestFunctional/serial/KubectlGetPods 4.34
52 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.03
53 TestFunctional/serial/CacheCmd/cache/cache_reload 12.09
55 TestFunctional/serial/MinikubeKubectlCmd 5.91
56 TestFunctional/serial/MinikubeKubectlCmdDirectly 5.86
57 TestFunctional/serial/ExtraConfig 113.08
58 TestFunctional/serial/ComponentHealth 4.15
59 TestFunctional/serial/LogsCmd 3.64
60 TestFunctional/serial/LogsFileCmd 4.39
66 TestFunctional/parallel/StatusCmd 13.14
69 TestFunctional/parallel/ServiceCmd 5.46
70 TestFunctional/parallel/ServiceCmdConnect 5.33
72 TestFunctional/parallel/PersistentVolumeClaim 4.31
74 TestFunctional/parallel/SSHCmd 10.63
75 TestFunctional/parallel/CpCmd 13.2
76 TestFunctional/parallel/MySQL 4.6
77 TestFunctional/parallel/FileSync 7.51
78 TestFunctional/parallel/CertSync 23.73
82 TestFunctional/parallel/NodeLabels 4.49
84 TestFunctional/parallel/NonActiveRuntimeDisabled 3.28
86 TestFunctional/parallel/DockerEnv/powershell 9.44
91 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
98 TestFunctional/parallel/UpdateContextCmd/no_changes 3.24
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 3.36
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 3.24
102 TestFunctional/parallel/ImageCommands/ImageListShort 3.06
103 TestFunctional/parallel/ImageCommands/ImageListTable 2.89
104 TestFunctional/parallel/ImageCommands/ImageListJson 3.03
105 TestFunctional/parallel/ImageCommands/ImageListYaml 2.94
106 TestFunctional/parallel/ImageCommands/ImageBuild 8.94
107 TestFunctional/parallel/ImageCommands/Setup 2.17
109 TestFunctional/parallel/Version/components 3.15
110 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.27
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.47
112 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.1
113 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.12
115 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.25
116 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.09
122 TestIngressAddonLegacy/StartLegacyK8sCluster 76.18
124 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 7.03
126 TestIngressAddonLegacy/serial/ValidateIngressAddons 3.89
129 TestJSONOutput/start/Command 73.8
130 TestJSONOutput/start/Audit 0
132 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
133 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0.01
135 TestJSONOutput/pause/Command 3.06
136 TestJSONOutput/pause/Audit 0
141 TestJSONOutput/unpause/Command 3.07
142 TestJSONOutput/unpause/Audit 0
147 TestJSONOutput/stop/Command 21.95
148 TestJSONOutput/stop/Audit 0
150 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
154 TestKicCustomNetwork/create_custom_network 246.18
156 TestKicExistingNetwork 4.12
157 TestKicCustomSubnet 235.58
159 TestMinikubeProfile 94.53
162 TestMountStart/serial/StartWithMountFirst 78
165 TestMultiNode/serial/FreshStart2Nodes 78.14
166 TestMultiNode/serial/DeployApp2Nodes 16.77
167 TestMultiNode/serial/PingHostFrom2Pods 5.66
168 TestMultiNode/serial/AddNode 6.99
169 TestMultiNode/serial/ProfileList 7.64
170 TestMultiNode/serial/CopyFile 6.61
171 TestMultiNode/serial/StopNode 9.96
172 TestMultiNode/serial/StartAfterStop 8.35
173 TestMultiNode/serial/RestartKeepsNodes 136.89
174 TestMultiNode/serial/DeleteNode 9.9
175 TestMultiNode/serial/StopMultiNode 31.48
176 TestMultiNode/serial/RestartMultiNode 114.92
177 TestMultiNode/serial/ValidateNameConflict 162.7
181 TestPreload 85.44
182 TestScheduledStopWindows 85.28
186 TestInsufficientStorage 28.85
187 TestRunningBinaryUpgrade 282.38
189 TestKubernetesUpgrade 112.59
190 TestMissingContainerUpgrade 206.43
201 TestNoKubernetes/serial/StartWithK8s 83.15
202 TestStoppedBinaryUpgrade/Upgrade 295.21
203 TestNoKubernetes/serial/StartWithStopK8s 119.45
215 TestNoKubernetes/serial/Start 101.12
217 TestPause/serial/Start 81.69
218 TestStoppedBinaryUpgrade/MinikubeLogs 3.41
220 TestStartStop/group/old-k8s-version/serial/FirstStart 81.06
222 TestStartStop/group/embed-certs/serial/FirstStart 82.13
224 TestStartStop/group/no-preload/serial/FirstStart 80.68
225 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
226 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 7.2
227 TestStartStop/group/old-k8s-version/serial/Stop 26.89
228 TestStartStop/group/embed-certs/serial/DeployApp 8.36
229 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 7.27
230 TestStartStop/group/embed-certs/serial/Stop 27.34
231 TestStartStop/group/no-preload/serial/DeployApp 8.42
232 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 10.21
233 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 7.32
234 TestStartStop/group/old-k8s-version/serial/SecondStart 117.74
235 TestStartStop/group/no-preload/serial/Stop 27
236 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 9.87
237 TestStartStop/group/embed-certs/serial/SecondStart 118.52
238 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 9.87
239 TestStartStop/group/no-preload/serial/SecondStart 118.43
241 TestStartStop/group/default-k8s-different-port/serial/FirstStart 81.45
242 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 4.18
243 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 4.26
244 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 7.45
245 TestStartStop/group/old-k8s-version/serial/Pause 11.57
246 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.55
247 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 4.1
248 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 4.49
249 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 7.52
250 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 7.46
251 TestStartStop/group/default-k8s-different-port/serial/Stop 27.1
252 TestStartStop/group/embed-certs/serial/Pause 11.54
253 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 4.22
255 TestStartStop/group/newest-cni/serial/FirstStart 81.78
256 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 4.45
257 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 7.48
258 TestStartStop/group/no-preload/serial/Pause 11.78
259 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 10.2
260 TestNetworkPlugins/group/auto/Start 77.54
261 TestStartStop/group/default-k8s-different-port/serial/SecondStart 118.99
262 TestNetworkPlugins/group/kindnet/Start 77.35
265 TestStartStop/group/newest-cni/serial/Stop 26.82
266 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 10.05
267 TestNetworkPlugins/group/cilium/Start 77.74
268 TestStartStop/group/newest-cni/serial/SecondStart 118.34
269 TestNetworkPlugins/group/calico/Start 77.65
270 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 3.97
271 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 4.18
272 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 7.32
273 TestStartStop/group/default-k8s-different-port/serial/Pause 11.53
274 TestNetworkPlugins/group/false/Start 77.46
275 TestNetworkPlugins/group/bridge/Start 77.25
276 TestNetworkPlugins/group/enable-default-cni/Start 77.08
279 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 7.28
280 TestStartStop/group/newest-cni/serial/Pause 11.53
281 TestNetworkPlugins/group/kubenet/Start 75.22
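Each failure in the table above can be re-run individually with `go test -run`. A minimal sketch of composing that invocation, assuming a minikube source checkout; the test name `TestOffline` and the 90m timeout here are illustrative, not taken from this report:

```shell
# Compose a `go test -run` command targeting one failed test by exact name.
# The anchors ^...$ prevent the regex from matching similarly named tests.
TEST_NAME="TestOffline"
RUN_CMD="go test ./test/integration -run ^${TEST_NAME}\$ -timeout 90m -v"
echo "$RUN_CMD"
# → go test ./test/integration -run ^TestOffline$ -timeout 90m -v
```

For subtests, `go test` matches each slash-separated path segment separately, so a name like `TestFunctional/serial/SoftStart` can be passed to `-run` as-is.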
TestOffline (92.49s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220604161047-5712 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-20220604161047-5712 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: exit status 60 (1m19.2232222s)

-- stdout --
	* [offline-docker-20220604161047-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node offline-docker-20220604161047-5712 in cluster offline-docker-20220604161047-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-20220604161047-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:10:47.512155    7044 out.go:296] Setting OutFile to fd 920 ...
	I0604 16:10:47.585155    7044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:10:47.585155    7044 out.go:309] Setting ErrFile to fd 752...
	I0604 16:10:47.585155    7044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:10:47.595159    7044 out.go:303] Setting JSON to false
	I0604 16:10:47.598199    7044 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10119,"bootTime":1654348928,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:10:47.598199    7044 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:10:47.602151    7044 out.go:177] * [offline-docker-20220604161047-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:10:47.608156    7044 notify.go:193] Checking for updates...
	I0604 16:10:47.612149    7044 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:10:47.618159    7044 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:10:47.626646    7044 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:10:47.635479    7044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:10:47.641457    7044 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:10:47.641457    7044 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:10:50.357875    7044 docker.go:137] docker version: linux-20.10.16
	I0604 16:10:50.365329    7044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:10:52.387713    7044 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.022362s)
	I0604 16:10:52.388297    7044 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:10:51.3603557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:10:52.392659    7044 out.go:177] * Using the docker driver based on user configuration
	I0604 16:10:52.394774    7044 start.go:284] selected driver: docker
	I0604 16:10:52.394774    7044 start.go:806] validating driver "docker" against <nil>
	I0604 16:10:52.394774    7044 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:10:52.466796    7044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:10:54.595233    7044 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1281995s)
	I0604 16:10:54.595909    7044 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:10:53.5441914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:10:54.596614    7044 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:10:54.597859    7044 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:10:54.606328    7044 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:10:54.609009    7044 cni.go:95] Creating CNI manager for ""
	I0604 16:10:54.609444    7044 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:10:54.609444    7044 start_flags.go:306] config:
	{Name:offline-docker-20220604161047-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-docker-20220604161047-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:10:54.613350    7044 out.go:177] * Starting control plane node offline-docker-20220604161047-5712 in cluster offline-docker-20220604161047-5712
	I0604 16:10:54.615363    7044 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:10:54.617367    7044 out.go:177] * Pulling base image ...
	I0604 16:10:54.621357    7044 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:10:54.621357    7044 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:10:54.621357    7044 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:10:54.621357    7044 cache.go:57] Caching tarball of preloaded images
	I0604 16:10:54.621357    7044 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:10:54.622358    7044 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:10:54.622358    7044 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220604161047-5712\config.json ...
	I0604 16:10:54.622358    7044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220604161047-5712\config.json: {Name:mk47994c9e4c4f509bb0e74c784d394b76d83f82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:10:55.752854    7044 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:10:55.752854    7044 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:10:55.752854    7044 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:10:55.752854    7044 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:10:55.752854    7044 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:10:55.752854    7044 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:10:55.752854    7044 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:10:55.752854    7044 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:10:55.752854    7044 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:10:58.390709    7044 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:10:58.390802    7044 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:10:58.390910    7044 start.go:352] acquiring machines lock for offline-docker-20220604161047-5712: {Name:mk5ad59aa788c71c64e8030f87a0c48222def3a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:10:58.391363    7044 start.go:356] acquired machines lock for "offline-docker-20220604161047-5712" in 300.2µs
	I0604 16:10:58.391659    7044 start.go:91] Provisioning new machine with config: &{Name:offline-docker-20220604161047-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-docker-20220604161047-5712 Na
mespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:10:58.391779    7044 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:10:58.680395    7044 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:10:58.681115    7044 start.go:165] libmachine.API.Create for "offline-docker-20220604161047-5712" (driver="docker")
	I0604 16:10:58.681115    7044 client.go:168] LocalClient.Create starting
	I0604 16:10:58.681776    7044 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:10:58.682022    7044 main.go:134] libmachine: Decoding PEM data...
	I0604 16:10:58.682022    7044 main.go:134] libmachine: Parsing certificate...
	I0604 16:10:58.682022    7044 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:10:58.682022    7044 main.go:134] libmachine: Decoding PEM data...
	I0604 16:10:58.682022    7044 main.go:134] libmachine: Parsing certificate...
	I0604 16:10:58.691758    7044 cli_runner.go:164] Run: docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:11:00.267465    7044 cli_runner.go:211] docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:11:00.267465    7044 cli_runner.go:217] Completed: docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.5756901s)
	I0604 16:11:00.277976    7044 network_create.go:272] running [docker network inspect offline-docker-20220604161047-5712] to gather additional debugging logs...
	I0604 16:11:00.277976    7044 cli_runner.go:164] Run: docker network inspect offline-docker-20220604161047-5712
	W0604 16:11:01.374972    7044 cli_runner.go:211] docker network inspect offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:01.374972    7044 cli_runner.go:217] Completed: docker network inspect offline-docker-20220604161047-5712: (1.0969837s)
	I0604 16:11:01.374972    7044 network_create.go:275] error running [docker network inspect offline-docker-20220604161047-5712]: docker network inspect offline-docker-20220604161047-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220604161047-5712
	I0604 16:11:01.374972    7044 network_create.go:277] output of [docker network inspect offline-docker-20220604161047-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220604161047-5712
	
	** /stderr **
	I0604 16:11:01.382611    7044 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:11:02.703877    7044 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3212513s)
	I0604 16:11:02.732951    7044 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000110380] misses:0}
	I0604 16:11:02.732951    7044 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:11:02.732951    7044 network_create.go:115] attempt to create docker network offline-docker-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:11:02.743869    7044 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712
	W0604 16:11:03.921368    7044 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:03.921368    7044 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: (1.176494s)
	E0604 16:11:03.921368    7044 network_create.go:104] error while trying to create docker network offline-docker-20220604161047-5712 192.168.49.0/24: create docker network offline-docker-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3538b17eed04abc9665d7c505cefaa129f7bc30f68bf45e927ce9df20762f3f8 (br-3538b17eed04): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:11:03.921368    7044 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3538b17eed04abc9665d7c505cefaa129f7bc30f68bf45e927ce9df20762f3f8 (br-3538b17eed04): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3538b17eed04abc9665d7c505cefaa129f7bc30f68bf45e927ce9df20762f3f8 (br-3538b17eed04): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:11:03.937361    7044 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:11:05.034597    7044 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0971707s)
	I0604 16:11:05.045766    7044 cli_runner.go:164] Run: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:11:06.148250    7044 cli_runner.go:211] docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:11:06.148250    7044 cli_runner.go:217] Completed: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: (1.1024727s)
	I0604 16:11:06.148250    7044 client.go:171] LocalClient.Create took 7.4670542s
	I0604 16:11:08.161840    7044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:11:08.166924    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:11:09.239271    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:09.239313    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0720598s)
	I0604 16:11:09.239567    7044 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:09.529288    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:11:10.605504    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:10.605504    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.076205s)
	W0604 16:11:10.605504    7044 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	
	W0604 16:11:10.605504    7044 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:10.615023    7044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:11:10.622506    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:11:11.678245    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:11.678245    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0557275s)
	I0604 16:11:11.678245    7044 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:11.990874    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:11:13.040309    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:13.040309    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0494228s)
	W0604 16:11:13.040309    7044 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	
	W0604 16:11:13.040309    7044 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:13.040309    7044 start.go:134] duration metric: createHost completed in 14.6483712s
	I0604 16:11:13.040309    7044 start.go:81] releasing machines lock for "offline-docker-20220604161047-5712", held for 14.648787s
	W0604 16:11:13.040309    7044 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for offline-docker-20220604161047-5712 container: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220604161047-5712': mkdir /var/lib/docker/volumes/offline-docker-20220604161047-5712: read-only file system
	I0604 16:11:13.057575    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:14.125155    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:14.125155    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0675685s)
	I0604 16:11:14.125155    7044 delete.go:82] Unable to get host status for offline-docker-20220604161047-5712, assuming it has already been deleted: state: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	W0604 16:11:14.125155    7044 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220604161047-5712 container: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220604161047-5712': mkdir /var/lib/docker/volumes/offline-docker-20220604161047-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220604161047-5712 container: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220604161047-5712': mkdir /var/lib/docker/volumes/offline-docker-20220604161047-5712: read-only file system
	
	I0604 16:11:14.125155    7044 start.go:614] Will try again in 5 seconds ...
	I0604 16:11:19.139331    7044 start.go:352] acquiring machines lock for offline-docker-20220604161047-5712: {Name:mk5ad59aa788c71c64e8030f87a0c48222def3a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:11:19.139591    7044 start.go:356] acquired machines lock for "offline-docker-20220604161047-5712" in 179.7µs
	I0604 16:11:19.139747    7044 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:11:19.139814    7044 fix.go:55] fixHost starting: 
	I0604 16:11:19.153143    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:20.208680    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:20.208754    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0553135s)
	I0604 16:11:20.208754    7044 fix.go:103] recreateIfNeeded on offline-docker-20220604161047-5712: state= err=unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:20.208856    7044 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:11:20.322384    7044 out.go:177] * docker "offline-docker-20220604161047-5712" container is missing, will recreate.
	I0604 16:11:20.325312    7044 delete.go:124] DEMOLISHING offline-docker-20220604161047-5712 ...
	I0604 16:11:20.340926    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:21.409768    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:21.409768    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0688299s)
	W0604 16:11:21.409768    7044 stop.go:75] unable to get state: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:21.409768    7044 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:21.426124    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:22.498213    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:22.498213    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0720769s)
	I0604 16:11:22.498213    7044 delete.go:82] Unable to get host status for offline-docker-20220604161047-5712, assuming it has already been deleted: state: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:22.506108    7044 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220604161047-5712
	W0604 16:11:23.612441    7044 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:23.612441    7044 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} offline-docker-20220604161047-5712: (1.1063208s)
	I0604 16:11:23.612441    7044 kic.go:356] could not find the container offline-docker-20220604161047-5712 to remove it. will try anyways
	I0604 16:11:23.619850    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:24.729742    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:24.729894    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.1088801s)
	W0604 16:11:24.729952    7044 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:24.740747    7044 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-20220604161047-5712 /bin/bash -c "sudo init 0"
	W0604 16:11:25.817590    7044 cli_runner.go:211] docker exec --privileged -t offline-docker-20220604161047-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:11:25.817590    7044 cli_runner.go:217] Completed: docker exec --privileged -t offline-docker-20220604161047-5712 /bin/bash -c "sudo init 0": (1.0768309s)
	I0604 16:11:25.817590    7044 oci.go:625] error shutdown offline-docker-20220604161047-5712: docker exec --privileged -t offline-docker-20220604161047-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:26.834234    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:27.855274    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:27.855274    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.021029s)
	I0604 16:11:27.855274    7044 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:27.855274    7044 oci.go:639] temporary error: container offline-docker-20220604161047-5712 status is  but expect it to be exited
	I0604 16:11:27.855274    7044 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:28.337548    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:29.424245    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:29.424326    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0865092s)
	I0604 16:11:29.424403    7044 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:29.424444    7044 oci.go:639] temporary error: container offline-docker-20220604161047-5712 status is  but expect it to be exited
	I0604 16:11:29.424499    7044 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:30.333841    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:31.395238    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:31.395238    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0613853s)
	I0604 16:11:31.395238    7044 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:31.395238    7044 oci.go:639] temporary error: container offline-docker-20220604161047-5712 status is  but expect it to be exited
	I0604 16:11:31.395238    7044 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:32.042676    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:33.140137    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:33.140137    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0973248s)
	I0604 16:11:33.140137    7044 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:33.140137    7044 oci.go:639] temporary error: container offline-docker-20220604161047-5712 status is  but expect it to be exited
	I0604 16:11:33.140137    7044 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:34.269186    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:35.333447    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:35.333623    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0642495s)
	I0604 16:11:35.333659    7044 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:35.333659    7044 oci.go:639] temporary error: container offline-docker-20220604161047-5712 status is  but expect it to be exited
	I0604 16:11:35.333762    7044 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:36.865931    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:37.927451    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:37.927499    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0613221s)
	I0604 16:11:37.927499    7044 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:37.927499    7044 oci.go:639] temporary error: container offline-docker-20220604161047-5712 status is  but expect it to be exited
	I0604 16:11:37.927499    7044 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:40.989697    7044 cli_runner.go:164] Run: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}
	W0604 16:11:42.024560    7044 cli_runner.go:211] docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:11:42.024560    7044 cli_runner.go:217] Completed: docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: (1.0348517s)
	I0604 16:11:42.024560    7044 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:42.024560    7044 oci.go:639] temporary error: container offline-docker-20220604161047-5712 status is  but expect it to be exited
	I0604 16:11:42.024560    7044 oci.go:88] couldn't shut down offline-docker-20220604161047-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	 
	I0604 16:11:42.031562    7044 cli_runner.go:164] Run: docker rm -f -v offline-docker-20220604161047-5712
	I0604 16:11:43.124908    7044 cli_runner.go:217] Completed: docker rm -f -v offline-docker-20220604161047-5712: (1.0933343s)
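The shutdown-verification loop above (`retry.go:31`) never succeeds because the container was never created, so it walks through its delay schedule — 462ms, 890ms, 636ms, 1.1s, 1.5s, 3.0s — before `oci.go:88` gives up and minikube force-removes with `docker rm -f -v`. The delays roughly double with random jitter; a minimal sketch of that backoff pattern (not minikube's actual retry.go, whose constants are assumptions here):

```python
# Sketch only: jittered, roughly-doubling retry delays like the
# "will retry after ..." sequence in the log above.
import random

def backoff_delays(initial, attempts, seed=None):
    """Return `attempts` delays: each step doubles the base and
    applies +/-50% jitter, so consecutive values can locally dip
    (as 890ms -> 636ms does in the log) while trending upward."""
    rng = random.Random(seed)
    delays, base = [], initial
    for _ in range(attempts):
        delays.append(base * rng.uniform(0.5, 1.5))
        base *= 2
    return delays
```

With `initial=0.5` and six attempts the schedule spans roughly 0.25s to 24s, matching the shape (though not the exact values) of the logged sequence.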
	I0604 16:11:43.131911    7044 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220604161047-5712
	W0604 16:11:44.245727    7044 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:44.245727    7044 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} offline-docker-20220604161047-5712: (1.1138038s)
	I0604 16:11:44.253645    7044 cli_runner.go:164] Run: docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:11:45.362272    7044 cli_runner.go:211] docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:11:45.362358    7044 cli_runner.go:217] Completed: docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1084495s)
	I0604 16:11:45.370041    7044 network_create.go:272] running [docker network inspect offline-docker-20220604161047-5712] to gather additional debugging logs...
	I0604 16:11:45.370041    7044 cli_runner.go:164] Run: docker network inspect offline-docker-20220604161047-5712
	W0604 16:11:46.434329    7044 cli_runner.go:211] docker network inspect offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:46.434329    7044 cli_runner.go:217] Completed: docker network inspect offline-docker-20220604161047-5712: (1.0642768s)
	I0604 16:11:46.434329    7044 network_create.go:275] error running [docker network inspect offline-docker-20220604161047-5712]: docker network inspect offline-docker-20220604161047-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220604161047-5712
	I0604 16:11:46.434329    7044 network_create.go:277] output of [docker network inspect offline-docker-20220604161047-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220604161047-5712
	
	** /stderr **
	W0604 16:11:46.435459    7044 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:11:46.435712    7044 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:11:47.447148    7044 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:11:47.460425    7044 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:11:47.460425    7044 start.go:165] libmachine.API.Create for "offline-docker-20220604161047-5712" (driver="docker")
	I0604 16:11:47.460425    7044 client.go:168] LocalClient.Create starting
	I0604 16:11:47.461400    7044 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:11:47.461833    7044 main.go:134] libmachine: Decoding PEM data...
	I0604 16:11:47.461946    7044 main.go:134] libmachine: Parsing certificate...
	I0604 16:11:47.462113    7044 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:11:47.462415    7044 main.go:134] libmachine: Decoding PEM data...
	I0604 16:11:47.462415    7044 main.go:134] libmachine: Parsing certificate...
	I0604 16:11:47.470928    7044 cli_runner.go:164] Run: docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:11:48.537877    7044 cli_runner.go:211] docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:11:48.537950    7044 cli_runner.go:217] Completed: docker network inspect offline-docker-20220604161047-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0666108s)
	I0604 16:11:48.545908    7044 network_create.go:272] running [docker network inspect offline-docker-20220604161047-5712] to gather additional debugging logs...
	I0604 16:11:48.545908    7044 cli_runner.go:164] Run: docker network inspect offline-docker-20220604161047-5712
	W0604 16:11:49.654234    7044 cli_runner.go:211] docker network inspect offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:49.654234    7044 cli_runner.go:217] Completed: docker network inspect offline-docker-20220604161047-5712: (1.1083148s)
	I0604 16:11:49.654234    7044 network_create.go:275] error running [docker network inspect offline-docker-20220604161047-5712]: docker network inspect offline-docker-20220604161047-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220604161047-5712
	I0604 16:11:49.654234    7044 network_create.go:277] output of [docker network inspect offline-docker-20220604161047-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220604161047-5712
	
	** /stderr **
	I0604 16:11:49.662260    7044 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:11:50.769190    7044 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.105761s)
	I0604 16:11:50.786339    7044 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000110380] amended:false}} dirty:map[] misses:0}
	I0604 16:11:50.786339    7044 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:11:50.803685    7044 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000110380] amended:true}} dirty:map[192.168.49.0:0xc000110380 192.168.58.0:0xc00011e100] misses:0}
	I0604 16:11:50.803760    7044 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:11:50.803834    7044 network_create.go:115] attempt to create docker network offline-docker-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:11:50.810949    7044 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712
	W0604 16:11:51.962950    7044 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:51.962950    7044 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: (1.151989s)
	E0604 16:11:51.962950    7044 network_create.go:104] error while trying to create docker network offline-docker-20220604161047-5712 192.168.58.0/24: create docker network offline-docker-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c73b568bee8c1f6c52eaf40212ea9f49f117b46e97170e391af3a43b9116ba06 (br-c73b568bee8c): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:11:51.962950    7044 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c73b568bee8c1f6c52eaf40212ea9f49f117b46e97170e391af3a43b9116ba06 (br-c73b568bee8c): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c73b568bee8c1f6c52eaf40212ea9f49f117b46e97170e391af3a43b9116ba06 (br-c73b568bee8c): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:11:51.979187    7044 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:11:53.064584    7044 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0853859s)
	I0604 16:11:53.072058    7044 cli_runner.go:164] Run: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:11:54.194152    7044 cli_runner.go:211] docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:11:54.194152    7044 cli_runner.go:217] Completed: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: (1.1220824s)
	I0604 16:11:54.194152    7044 client.go:171] LocalClient.Create took 6.7336541s
	I0604 16:11:56.216800    7044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:11:56.223654    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:11:57.306819    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:57.306896    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0829362s)
	I0604 16:11:57.307091    7044 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:57.651347    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:11:58.730631    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:58.730631    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0791914s)
	W0604 16:11:58.730966    7044 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	
	W0604 16:11:58.731049    7044 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:11:58.741053    7044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:11:58.747040    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:11:59.826193    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:11:59.826193    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0791419s)
	I0604 16:11:59.826193    7044 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:12:00.055514    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:12:01.129583    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:12:01.129583    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0740581s)
	W0604 16:12:01.129583    7044 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	
	W0604 16:12:01.129583    7044 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:12:01.129583    7044 start.go:134] duration metric: createHost completed in 13.6820496s
	I0604 16:12:01.140325    7044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:12:01.147418    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:12:02.216863    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:12:02.216863    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0692751s)
	I0604 16:12:02.216863    7044 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:12:02.483004    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:12:03.562757    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:12:03.562861    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0796044s)
	W0604 16:12:03.562861    7044 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	
	W0604 16:12:03.562861    7044 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:12:03.572774    7044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:12:03.580541    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:12:04.667824    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:12:04.667824    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0872709s)
	I0604 16:12:04.667824    7044 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:12:04.876355    7044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712
	W0604 16:12:05.947770    7044 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712 returned with exit code 1
	I0604 16:12:05.947770    7044 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: (1.0714032s)
	W0604 16:12:05.947770    7044 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	
	W0604 16:12:05.947770    7044 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220604161047-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220604161047-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712
	I0604 16:12:05.947770    7044 fix.go:57] fixHost completed within 46.807449s
	I0604 16:12:05.947770    7044 start.go:81] releasing machines lock for "offline-docker-20220604161047-5712", held for 46.8076714s
	W0604 16:12:05.948372    7044 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-20220604161047-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220604161047-5712 container: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220604161047-5712': mkdir /var/lib/docker/volumes/offline-docker-20220604161047-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p offline-docker-20220604161047-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220604161047-5712 container: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220604161047-5712': mkdir /var/lib/docker/volumes/offline-docker-20220604161047-5712: read-only file system
	
	I0604 16:12:06.421515    7044 out.go:177] 
	W0604 16:12:06.424899    7044 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220604161047-5712 container: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220604161047-5712': mkdir /var/lib/docker/volumes/offline-docker-20220604161047-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220604161047-5712 container: docker volume create offline-docker-20220604161047-5712 --label name.minikube.sigs.k8s.io=offline-docker-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220604161047-5712': mkdir /var/lib/docker/volumes/offline-docker-20220604161047-5712: read-only file system
	
	W0604 16:12:06.425084    7044 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:12:06.425259    7044 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:12:06.429307    7044 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-20220604161047-5712 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker failed: exit status 60
panic.go:482: *** TestOffline FAILED at 2022-06-04 16:12:06.5708816 +0000 GMT m=+3131.765430901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-20220604161047-5712

=== CONT  TestOffline
helpers_test.go:231: (dbg) Non-zero exit: docker inspect offline-docker-20220604161047-5712: exit status 1 (1.164118s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: offline-docker-20220604161047-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220604161047-5712 -n offline-docker-20220604161047-5712

=== CONT  TestOffline
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220604161047-5712 -n offline-docker-20220604161047-5712: exit status 7 (2.9924375s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:12:10.700899    3620 status.go:247] status error: host: state: unknown state "offline-docker-20220604161047-5712": docker container inspect offline-docker-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220604161047-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-20220604161047-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-20220604161047-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220604161047-5712

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220604161047-5712: (8.9981669s)
--- FAIL: TestOffline (92.49s)

TestAddons/Setup (74.58s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220604152202-5712 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-20220604152202-5712 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 60 (1m14.4784019s)

-- stdout --
	* [addons-20220604152202-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node addons-20220604152202-5712 in cluster addons-20220604152202-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "addons-20220604152202-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 15:22:02.342013    6332 out.go:296] Setting OutFile to fd 636 ...
	I0604 15:22:02.396107    6332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:22:02.396107    6332 out.go:309] Setting ErrFile to fd 640...
	I0604 15:22:02.396107    6332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:22:02.419101    6332 out.go:303] Setting JSON to false
	I0604 15:22:02.421692    6332 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7194,"bootTime":1654348928,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:22:02.421692    6332 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:22:02.425532    6332 out.go:177] * [addons-20220604152202-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:22:02.429115    6332 notify.go:193] Checking for updates...
	I0604 15:22:02.432138    6332 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:22:02.434460    6332 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:22:02.437566    6332 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 15:22:02.439676    6332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 15:22:02.442286    6332 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:22:05.006868    6332 docker.go:137] docker version: linux-20.10.16
	I0604 15:22:05.015877    6332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:22:06.964504    6332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9484523s)
	I0604 15:22:06.965530    6332 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:22:06.0011876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:22:06.974577    6332 out.go:177] * Using the docker driver based on user configuration
	I0604 15:22:06.977363    6332 start.go:284] selected driver: docker
	I0604 15:22:06.977363    6332 start.go:806] validating driver "docker" against <nil>
	I0604 15:22:06.977363    6332 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 15:22:07.054163    6332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:22:09.012440    6332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.958258s)
	I0604 15:22:09.012440    6332 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:22:08.0548661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:22:09.012440    6332 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 15:22:09.014304    6332 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 15:22:09.019108    6332 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 15:22:09.021937    6332 cni.go:95] Creating CNI manager for ""
	I0604 15:22:09.022033    6332 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 15:22:09.022033    6332 start_flags.go:306] config:
	{Name:addons-20220604152202-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220604152202-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:22:09.025441    6332 out.go:177] * Starting control plane node addons-20220604152202-5712 in cluster addons-20220604152202-5712
	I0604 15:22:09.027673    6332 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 15:22:09.031883    6332 out.go:177] * Pulling base image ...
	I0604 15:22:09.037671    6332 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 15:22:09.037671    6332 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 15:22:09.037671    6332 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 15:22:09.037893    6332 cache.go:57] Caching tarball of preloaded images
	I0604 15:22:09.038071    6332 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 15:22:09.038071    6332 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 15:22:09.038854    6332 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220604152202-5712\config.json ...
	I0604 15:22:09.039111    6332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220604152202-5712\config.json: {Name:mkfdceb6ca0eb7743a25faa4cc2abb3e1ff40593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 15:22:10.098970    6332 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 15:22:10.098970    6332 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:22:10.098970    6332 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:22:10.098970    6332 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 15:22:10.098970    6332 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 15:22:10.098970    6332 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 15:22:10.098970    6332 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 15:22:10.098970    6332 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 15:22:10.098970    6332 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:22:12.310472    6332 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 15:22:12.311000    6332 cache.go:206] Successfully downloaded all kic artifacts
	I0604 15:22:12.311245    6332 start.go:352] acquiring machines lock for addons-20220604152202-5712: {Name:mk1e3f0b7f8b1333f5b22347b3edb834fe575b1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:22:12.311537    6332 start.go:356] acquired machines lock for "addons-20220604152202-5712" in 239.4µs
	I0604 15:22:12.311755    6332 start.go:91] Provisioning new machine with config: &{Name:addons-20220604152202-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220604152202-5712 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 15:22:12.311919    6332 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:22:12.318190    6332 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0604 15:22:12.318804    6332 start.go:165] libmachine.API.Create for "addons-20220604152202-5712" (driver="docker")
	I0604 15:22:12.318995    6332 client.go:168] LocalClient.Create starting
	I0604 15:22:12.319687    6332 main.go:134] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:22:12.483878    6332 main.go:134] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:22:12.608095    6332 cli_runner.go:164] Run: docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:22:13.657150    6332 cli_runner.go:211] docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:22:13.657150    6332 cli_runner.go:217] Completed: docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0488471s)
	I0604 15:22:13.665493    6332 network_create.go:272] running [docker network inspect addons-20220604152202-5712] to gather additional debugging logs...
	I0604 15:22:13.665493    6332 cli_runner.go:164] Run: docker network inspect addons-20220604152202-5712
	W0604 15:22:14.700301    6332 cli_runner.go:211] docker network inspect addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:14.700301    6332 cli_runner.go:217] Completed: docker network inspect addons-20220604152202-5712: (1.0347976s)
	I0604 15:22:14.700301    6332 network_create.go:275] error running [docker network inspect addons-20220604152202-5712]: docker network inspect addons-20220604152202-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220604152202-5712
	I0604 15:22:14.700301    6332 network_create.go:277] output of [docker network inspect addons-20220604152202-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220604152202-5712
	
	** /stderr **
	I0604 15:22:14.700301    6332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:22:15.732947    6332 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0325058s)
	I0604 15:22:15.753385    6332 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e488] misses:0}
	I0604 15:22:15.753385    6332 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:22:15.753385    6332 network_create.go:115] attempt to create docker network addons-20220604152202-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 15:22:15.761956    6332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712
	W0604 15:22:16.866298    6332 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:16.866298    6332 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: (1.1043311s)
	E0604 15:22:16.866298    6332 network_create.go:104] error while trying to create docker network addons-20220604152202-5712 192.168.49.0/24: create docker network addons-20220604152202-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0604 15:22:16.866298    6332 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220604152202-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220604152202-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	I0604 15:22:16.881420    6332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:22:17.888520    6332 cli_runner.go:164] Run: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:22:18.939248    6332 cli_runner.go:211] docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:22:18.939248    6332 cli_runner.go:217] Completed: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0506102s)
	I0604 15:22:18.939533    6332 client.go:171] LocalClient.Create took 6.6204718s
	I0604 15:22:20.957827    6332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:22:20.966960    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:22:21.979143    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:21.979143    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0121734s)
	I0604 15:22:21.979143    6332 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:22.272151    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:22:23.314481    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:23.314643    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0422567s)
	W0604 15:22:23.314643    6332 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	
	W0604 15:22:23.314643    6332 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:23.326131    6332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:22:23.333018    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:22:24.376828    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:24.376971    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0437996s)
	I0604 15:22:24.376971    6332 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:24.677111    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:22:25.700597    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:25.700597    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0234766s)
	W0604 15:22:25.700597    6332 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	
	W0604 15:22:25.700597    6332 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:25.700597    6332 start.go:134] duration metric: createHost completed in 13.3885443s
	I0604 15:22:25.700597    6332 start.go:81] releasing machines lock for "addons-20220604152202-5712", held for 13.3889027s
	W0604 15:22:25.700597    6332 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for addons-20220604152202-5712 container: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220604152202-5712: error while creating volume root path '/var/lib/docker/volumes/addons-20220604152202-5712': mkdir /var/lib/docker/volumes/addons-20220604152202-5712: read-only file system
	I0604 15:22:25.716052    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:26.755935    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:26.755935    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0398726s)
	I0604 15:22:26.755935    6332 delete.go:82] Unable to get host status for addons-20220604152202-5712, assuming it has already been deleted: state: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	W0604 15:22:26.756615    6332 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220604152202-5712 container: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220604152202-5712: error while creating volume root path '/var/lib/docker/volumes/addons-20220604152202-5712': mkdir /var/lib/docker/volumes/addons-20220604152202-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220604152202-5712 container: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220604152202-5712: error while creating volume root path '/var/lib/docker/volumes/addons-20220604152202-5712': mkdir /var/lib/docker/volumes/addons-20220604152202-5712: read-only file system
	
	I0604 15:22:26.756615    6332 start.go:614] Will try again in 5 seconds ...
	I0604 15:22:31.762838    6332 start.go:352] acquiring machines lock for addons-20220604152202-5712: {Name:mk1e3f0b7f8b1333f5b22347b3edb834fe575b1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:22:31.762838    6332 start.go:356] acquired machines lock for "addons-20220604152202-5712" in 0s
	I0604 15:22:31.763532    6332 start.go:94] Skipping create...Using existing machine configuration
	I0604 15:22:31.763532    6332 fix.go:55] fixHost starting: 
	I0604 15:22:31.777017    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:32.780790    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:32.780790    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0037622s)
	I0604 15:22:32.780790    6332 fix.go:103] recreateIfNeeded on addons-20220604152202-5712: state= err=unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:32.780790    6332 fix.go:108] machineExists: false. err=machine does not exist
	I0604 15:22:32.784607    6332 out.go:177] * docker "addons-20220604152202-5712" container is missing, will recreate.
	I0604 15:22:32.788488    6332 delete.go:124] DEMOLISHING addons-20220604152202-5712 ...
	I0604 15:22:32.801780    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:33.825682    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:33.825682    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0226457s)
	W0604 15:22:33.825682    6332 stop.go:75] unable to get state: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:33.826004    6332 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:33.843756    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:34.870891    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:34.870943    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0270306s)
	I0604 15:22:34.871064    6332 delete.go:82] Unable to get host status for addons-20220604152202-5712, assuming it has already been deleted: state: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:34.879082    6332 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220604152202-5712
	W0604 15:22:35.916164    6332 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:35.916164    6332 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} addons-20220604152202-5712: (1.0367989s)
	I0604 15:22:35.916321    6332 kic.go:356] could not find the container addons-20220604152202-5712 to remove it. will try anyways
	I0604 15:22:35.923234    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:36.932695    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:36.932695    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0084414s)
	W0604 15:22:36.932695    6332 oci.go:84] error getting container status, will try to delete anyways: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:36.940982    6332 cli_runner.go:164] Run: docker exec --privileged -t addons-20220604152202-5712 /bin/bash -c "sudo init 0"
	W0604 15:22:37.930110    6332 cli_runner.go:211] docker exec --privileged -t addons-20220604152202-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 15:22:37.930110    6332 oci.go:625] error shutdown addons-20220604152202-5712: docker exec --privileged -t addons-20220604152202-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:38.949539    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:39.956045    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:39.956198    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0053476s)
	I0604 15:22:39.956198    6332 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:39.956198    6332 oci.go:639] temporary error: container addons-20220604152202-5712 status is  but expect it to be exited
	I0604 15:22:39.956198    6332 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:40.440926    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:41.445143    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:41.445143    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0042074s)
	I0604 15:22:41.445143    6332 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:41.445143    6332 oci.go:639] temporary error: container addons-20220604152202-5712 status is  but expect it to be exited
	I0604 15:22:41.445143    6332 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:42.358935    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:43.368484    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:43.368484    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0094532s)
	I0604 15:22:43.368484    6332 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:43.368484    6332 oci.go:639] temporary error: container addons-20220604152202-5712 status is  but expect it to be exited
	I0604 15:22:43.368484    6332 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:44.027676    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:45.048115    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:45.048115    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.020429s)
	I0604 15:22:45.048115    6332 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:45.048115    6332 oci.go:639] temporary error: container addons-20220604152202-5712 status is  but expect it to be exited
	I0604 15:22:45.048115    6332 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:46.177562    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:47.169156    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:47.169156    6332 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:47.169156    6332 oci.go:639] temporary error: container addons-20220604152202-5712 status is  but expect it to be exited
	I0604 15:22:47.169156    6332 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:48.696522    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:49.722768    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:49.722768    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0262359s)
	I0604 15:22:49.722768    6332 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:49.722768    6332 oci.go:639] temporary error: container addons-20220604152202-5712 status is  but expect it to be exited
	I0604 15:22:49.722768    6332 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:52.785102    6332 cli_runner.go:164] Run: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}
	W0604 15:22:53.851083    6332 cli_runner.go:211] docker container inspect addons-20220604152202-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:22:53.851212    6332 cli_runner.go:217] Completed: docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: (1.0659011s)
	I0604 15:22:53.851250    6332 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:22:53.851250    6332 oci.go:639] temporary error: container addons-20220604152202-5712 status is  but expect it to be exited
	I0604 15:22:53.851250    6332 oci.go:88] couldn't shut down addons-20220604152202-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "addons-20220604152202-5712": docker container inspect addons-20220604152202-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	 
	I0604 15:22:53.859089    6332 cli_runner.go:164] Run: docker rm -f -v addons-20220604152202-5712
	I0604 15:22:54.893573    6332 cli_runner.go:217] Completed: docker rm -f -v addons-20220604152202-5712: (1.034318s)
	I0604 15:22:54.901360    6332 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220604152202-5712
	W0604 15:22:55.910513    6332 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:55.910513    6332 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} addons-20220604152202-5712: (1.0091432s)
	I0604 15:22:55.918196    6332 cli_runner.go:164] Run: docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:22:56.915747    6332 cli_runner.go:211] docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:22:56.943310    6332 network_create.go:272] running [docker network inspect addons-20220604152202-5712] to gather additional debugging logs...
	I0604 15:22:56.943310    6332 cli_runner.go:164] Run: docker network inspect addons-20220604152202-5712
	W0604 15:22:57.948705    6332 cli_runner.go:211] docker network inspect addons-20220604152202-5712 returned with exit code 1
	I0604 15:22:57.948705    6332 cli_runner.go:217] Completed: docker network inspect addons-20220604152202-5712: (1.0053848s)
	I0604 15:22:57.948705    6332 network_create.go:275] error running [docker network inspect addons-20220604152202-5712]: docker network inspect addons-20220604152202-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220604152202-5712
	I0604 15:22:57.948705    6332 network_create.go:277] output of [docker network inspect addons-20220604152202-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220604152202-5712
	
	** /stderr **
	W0604 15:22:57.950110    6332 delete.go:139] delete failed (probably ok) <nil>
	I0604 15:22:57.950180    6332 fix.go:115] Sleeping 1 second for extra luck!
	I0604 15:22:58.959364    6332 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:22:58.966762    6332 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0604 15:22:58.966762    6332 start.go:165] libmachine.API.Create for "addons-20220604152202-5712" (driver="docker")
	I0604 15:22:58.966762    6332 client.go:168] LocalClient.Create starting
	I0604 15:22:58.967610    6332 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:22:58.967610    6332 main.go:134] libmachine: Decoding PEM data...
	I0604 15:22:58.967610    6332 main.go:134] libmachine: Parsing certificate...
	I0604 15:22:58.967610    6332 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:22:58.967610    6332 main.go:134] libmachine: Decoding PEM data...
	I0604 15:22:58.967610    6332 main.go:134] libmachine: Parsing certificate...
	I0604 15:22:58.976604    6332 cli_runner.go:164] Run: docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:23:00.000388    6332 cli_runner.go:211] docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:23:00.000388    6332 cli_runner.go:217] Completed: docker network inspect addons-20220604152202-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0237735s)
	I0604 15:23:00.007565    6332 network_create.go:272] running [docker network inspect addons-20220604152202-5712] to gather additional debugging logs...
	I0604 15:23:00.008553    6332 cli_runner.go:164] Run: docker network inspect addons-20220604152202-5712
	W0604 15:23:01.002221    6332 cli_runner.go:211] docker network inspect addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:01.002286    6332 network_create.go:275] error running [docker network inspect addons-20220604152202-5712]: docker network inspect addons-20220604152202-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220604152202-5712
	I0604 15:23:01.002286    6332 network_create.go:277] output of [docker network inspect addons-20220604152202-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220604152202-5712
	
	** /stderr **
	I0604 15:23:01.009885    6332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:23:02.022742    6332 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e488] amended:false}} dirty:map[] misses:0}
	I0604 15:23:02.023715    6332 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:23:02.033703    6332 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e488] amended:true}} dirty:map[192.168.49.0:0xc00014e488 192.168.58.0:0xc0005582c0] misses:0}
	I0604 15:23:02.033703    6332 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:23:02.033703    6332 network_create.go:115] attempt to create docker network addons-20220604152202-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 15:23:02.047926    6332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712
	W0604 15:23:03.137755    6332 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:03.137755    6332 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: (1.0898173s)
	E0604 15:23:03.137755    6332 network_create.go:104] error while trying to create docker network addons-20220604152202-5712 192.168.58.0/24: create docker network addons-20220604152202-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0604 15:23:03.137755    6332 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220604152202-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220604152202-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220604152202-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	I0604 15:23:03.151747    6332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:23:04.218605    6332 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.066656s)
	I0604 15:23:04.225651    6332 cli_runner.go:164] Run: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:23:05.279169    6332 cli_runner.go:211] docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:23:05.279339    6332 cli_runner.go:217] Completed: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0534742s)
	I0604 15:23:05.279339    6332 client.go:171] LocalClient.Create took 6.3125134s
	I0604 15:23:07.300099    6332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:23:07.306856    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:08.351171    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:08.351171    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0441894s)
	I0604 15:23:08.351371    6332 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:08.695691    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:09.746270    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:09.746270    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0505676s)
	W0604 15:23:09.746270    6332 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	
	W0604 15:23:09.746270    6332 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:09.757039    6332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:23:09.763742    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:10.786623    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:10.786623    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0226004s)
	I0604 15:23:10.786623    6332 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:11.017615    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:12.012987    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	W0604 15:23:12.012987    6332 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	
	W0604 15:23:12.012987    6332 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:12.012987    6332 start.go:134] duration metric: createHost completed in 13.053493s
	I0604 15:23:12.024610    6332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:23:12.031117    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:13.015699    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:13.016125    6332 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:13.277794    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:14.291296    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:14.291346    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0134925s)
	W0604 15:23:14.291556    6332 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	
	W0604 15:23:14.291582    6332 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:14.302654    6332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:23:14.310374    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:15.318026    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:15.318095    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0074811s)
	I0604 15:23:15.318315    6332 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:15.537319    6332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712
	W0604 15:23:16.545386    6332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712 returned with exit code 1
	I0604 15:23:16.545386    6332 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: (1.0079478s)
	W0604 15:23:16.545557    6332 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	
	W0604 15:23:16.545557    6332 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220604152202-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220604152202-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220604152202-5712
	I0604 15:23:16.545557    6332 fix.go:57] fixHost completed within 44.7815774s
	I0604 15:23:16.545557    6332 start.go:81] releasing machines lock for "addons-20220604152202-5712", held for 44.7817097s
	W0604 15:23:16.546604    6332 out.go:239] * Failed to start docker container. Running "minikube delete -p addons-20220604152202-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220604152202-5712 container: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220604152202-5712: error while creating volume root path '/var/lib/docker/volumes/addons-20220604152202-5712': mkdir /var/lib/docker/volumes/addons-20220604152202-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p addons-20220604152202-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220604152202-5712 container: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220604152202-5712: error while creating volume root path '/var/lib/docker/volumes/addons-20220604152202-5712': mkdir /var/lib/docker/volumes/addons-20220604152202-5712: read-only file system
	
	I0604 15:23:16.551191    6332 out.go:177] 
	W0604 15:23:16.553097    6332 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220604152202-5712 container: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220604152202-5712: error while creating volume root path '/var/lib/docker/volumes/addons-20220604152202-5712': mkdir /var/lib/docker/volumes/addons-20220604152202-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220604152202-5712 container: docker volume create addons-20220604152202-5712 --label name.minikube.sigs.k8s.io=addons-20220604152202-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220604152202-5712: error while creating volume root path '/var/lib/docker/volumes/addons-20220604152202-5712': mkdir /var/lib/docker/volumes/addons-20220604152202-5712: read-only file system
	
	W0604 15:23:16.553097    6332 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 15:23:16.553630    6332 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 15:23:16.557631    6332 out.go:177] 

** /stderr **
addons_test.go:77: out/minikube-windows-amd64.exe start -p addons-20220604152202-5712 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 60
--- FAIL: TestAddons/Setup (74.58s)

TestCertOptions (97.2s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220604161736-5712 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-options-20220604161736-5712 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: exit status 60 (1m17.0237554s)

-- stdout --
	* [cert-options-20220604161736-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cert-options-20220604161736-5712 in cluster cert-options-20220604161736-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-20220604161736-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:17:51.388316    4172 network_create.go:104] error while trying to create docker network cert-options-20220604161736-5712 192.168.49.0/24: create docker network cert-options-20220604161736-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220604161736-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6b207b3aaaa086519ce9e9fe6d330e46892a71211dbf3311c7aab37cdd9d787c (br-6b207b3aaaa0): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220604161736-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220604161736-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6b207b3aaaa086519ce9e9fe6d330e46892a71211dbf3311c7aab37cdd9d787c (br-6b207b3aaaa0): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-options-20220604161736-5712 container: docker volume create cert-options-20220604161736-5712 --label name.minikube.sigs.k8s.io=cert-options-20220604161736-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220604161736-5712: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220604161736-5712': mkdir /var/lib/docker/volumes/cert-options-20220604161736-5712: read-only file system
	
	E0604 16:18:39.625431    4172 network_create.go:104] error while trying to create docker network cert-options-20220604161736-5712 192.168.58.0/24: create docker network cert-options-20220604161736-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220604161736-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network edf8f1aae92d14e115df9501816e3b80077260e0c8e37d8b1d8103ff11367dd9 (br-edf8f1aae92d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220604161736-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220604161736-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network edf8f1aae92d14e115df9501816e3b80077260e0c8e37d8b1d8103ff11367dd9 (br-edf8f1aae92d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-options-20220604161736-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220604161736-5712 container: docker volume create cert-options-20220604161736-5712 --label name.minikube.sigs.k8s.io=cert-options-20220604161736-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220604161736-5712: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220604161736-5712': mkdir /var/lib/docker/volumes/cert-options-20220604161736-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220604161736-5712 container: docker volume create cert-options-20220604161736-5712 --label name.minikube.sigs.k8s.io=cert-options-20220604161736-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220604161736-5712: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220604161736-5712': mkdir /var/lib/docker/volumes/cert-options-20220604161736-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-options-20220604161736-5712 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost" : exit status 60
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220604161736-5712 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p cert-options-20220604161736-5712 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (3.2202279s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220604161736-5712": docker container inspect cert-options-20220604161736-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220604161736-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_7b8531d53ef9e7bbc6fc851111559258d7d600b6_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-windows-amd64.exe -p cert-options-20220604161736-5712 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:82: failed to inspect container for the port get port 8555 for "cert-options-20220604161736-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20220604161736-5712: exit status 1
stdout:

stderr:
Error: No such container: cert-options-20220604161736-5712
cert_options_test.go:85: expected to get a non-zero forwarded port but got 0
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220604161736-5712 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p cert-options-20220604161736-5712 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (3.1989047s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220604161736-5712": docker container inspect cert-options-20220604161736-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220604161736-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-windows-amd64.exe ssh -p cert-options-20220604161736-5712 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220604161736-5712": docker container inspect cert-options-20220604161736-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220604161736-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2022-06-04 16:19:01.0718342 +0000 GMT m=+3546.261952001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20220604161736-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-options-20220604161736-5712: exit status 1 (1.1492148s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: cert-options-20220604161736-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220604161736-5712 -n cert-options-20220604161736-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220604161736-5712 -n cert-options-20220604161736-5712: exit status 7 (3.0552236s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:19:05.258264    6256 status.go:247] status error: host: state: unknown state "cert-options-20220604161736-5712": docker container inspect cert-options-20220604161736-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220604161736-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-20220604161736-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-20220604161736-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220604161736-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220604161736-5712: (8.4469126s)
--- FAIL: TestCertOptions (97.20s)

TestCertExpiration (384.24s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220604161540-5712 --memory=2048 --cert-expiration=3m --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220604161540-5712 --memory=2048 --cert-expiration=3m --driver=docker: exit status 60 (1m17.9764856s)

-- stdout --
	* [cert-expiration-20220604161540-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cert-expiration-20220604161540-5712 in cluster cert-expiration-20220604161540-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220604161540-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:15:56.327563    3732 network_create.go:104] error while trying to create docker network cert-expiration-20220604161540-5712 192.168.49.0/24: create docker network cert-expiration-20220604161540-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network af0483e095632d961dde851f95199675e0a36b73f897740714cd6e26ec393278 (br-af0483e09563): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220604161540-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network af0483e095632d961dde851f95199675e0a36b73f897740714cd6e26ec393278 (br-af0483e09563): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	E0604 16:16:44.684387    3732 network_create.go:104] error while trying to create docker network cert-expiration-20220604161540-5712 192.168.58.0/24: create docker network cert-expiration-20220604161540-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 84a82665688232f5374b1312c63210fea16ec0e81eee8264da8cc8d8fa9e8159 (br-84a826656882): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220604161540-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 84a82665688232f5374b1312c63210fea16ec0e81eee8264da8cc8d8fa9e8159 (br-84a826656882): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220604161540-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-expiration-20220604161540-5712 --memory=2048 --cert-expiration=3m --driver=docker" : exit status 60

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220604161540-5712 --memory=2048 --cert-expiration=8760h --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220604161540-5712 --memory=2048 --cert-expiration=8760h --driver=docker: exit status 60 (1m53.7019756s)

-- stdout --
	* [cert-expiration-20220604161540-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220604161540-5712 in cluster cert-expiration-20220604161540-5712
	* Pulling base image ...
	* docker "cert-expiration-20220604161540-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220604161540-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:20:45.820021    5488 network_create.go:104] error while trying to create docker network cert-expiration-20220604161540-5712 192.168.49.0/24: create docker network cert-expiration-20220604161540-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e901c5692570cf20ca75fc4d12ad8b40785b22d8030c4f5cf62b164f53ead28b (br-e901c5692570): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220604161540-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e901c5692570cf20ca75fc4d12ad8b40785b22d8030c4f5cf62b164f53ead28b (br-e901c5692570): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	E0604 16:21:38.268648    5488 network_create.go:104] error while trying to create docker network cert-expiration-20220604161540-5712 192.168.58.0/24: create docker network cert-expiration-20220604161540-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e680a5749716c5037ddb612cf19fa7991573fe4e7b90d9dd21435597041dcd9b (br-e680a5749716): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220604161540-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e680a5749716c5037ddb612cf19fa7991573fe4e7b90d9dd21435597041dcd9b (br-e680a5749716): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220604161540-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-20220604161540-5712 --memory=2048 --cert-expiration=8760h --driver=docker" : exit status 60
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20220604161540-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220604161540-5712 in cluster cert-expiration-20220604161540-5712
	* Pulling base image ...
	* docker "cert-expiration-20220604161540-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220604161540-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:20:45.820021    5488 network_create.go:104] error while trying to create docker network cert-expiration-20220604161540-5712 192.168.49.0/24: create docker network cert-expiration-20220604161540-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e901c5692570cf20ca75fc4d12ad8b40785b22d8030c4f5cf62b164f53ead28b (br-e901c5692570): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220604161540-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e901c5692570cf20ca75fc4d12ad8b40785b22d8030c4f5cf62b164f53ead28b (br-e901c5692570): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	E0604 16:21:38.268648    5488 network_create.go:104] error while trying to create docker network cert-expiration-20220604161540-5712 192.168.58.0/24: create docker network cert-expiration-20220604161540-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e680a5749716c5037ddb612cf19fa7991573fe4e7b90d9dd21435597041dcd9b (br-e680a5749716): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220604161540-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220604161540-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e680a5749716c5037ddb612cf19fa7991573fe4e7b90d9dd21435597041dcd9b (br-e680a5749716): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220604161540-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220604161540-5712 container: docker volume create cert-expiration-20220604161540-5712 --label name.minikube.sigs.k8s.io=cert-expiration-20220604161540-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220604161540-5712: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220604161540-5712': mkdir /var/lib/docker/volumes/cert-expiration-20220604161540-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2022-06-04 16:21:52.5019874 +0000 GMT m=+3717.690251201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20220604161540-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-expiration-20220604161540-5712: exit status 1 (1.1312988s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: cert-expiration-20220604161540-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220604161540-5712 -n cert-expiration-20220604161540-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220604161540-5712 -n cert-expiration-20220604161540-5712: exit status 7 (3.0350603s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:21:56.660505    6160 status.go:247] status error: host: state: unknown state "cert-expiration-20220604161540-5712": docker container inspect cert-expiration-20220604161540-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-20220604161540-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-20220604161540-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-20220604161540-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220604161540-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220604161540-5712: (8.3818366s)
--- FAIL: TestCertExpiration (384.24s)
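The repeated "networks have overlapping IPv4" errors above are Docker rejecting a new bridge network whose subnet intersects one that already exists: both 192.168.49.0/24 and 192.168.58.0/24 were still held by stale `br-*` bridges from earlier runs. The check Docker performs is equivalent to a CIDR overlap test, sketched here with Python's `ipaddress` module (illustrative only, not minikube's or Docker's actual code; the "free" subnet is a hypothetical example):

```python
import ipaddress

# Subnet minikube tried in the log above, a stale bridge still occupying it,
# and a hypothetical subnet that is not in use.
requested = ipaddress.ip_network("192.168.58.0/24")
stale = ipaddress.ip_network("192.168.58.0/24")   # e.g. br-1140b1ac4d94
free = ipaddress.ip_network("192.168.67.0/24")    # hypothetical unused range

print(requested.overlaps(stale))  # True: Docker refuses to create the network
print(requested.overlaps(free))   # False: a subnet like this would not conflict
```

Removing the stale bridges (for example with `docker network prune`) or restarting Docker, as the log itself suggests, frees the subnets for the next `minikube start`.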

TestDockerFlags (97.09s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags


=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220604161559-5712 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-20220604161559-5712 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: exit status 60 (1m18.2069019s)

-- stdout --
	* [docker-flags-20220604161559-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node docker-flags-20220604161559-5712 in cluster docker-flags-20220604161559-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20220604161559-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:15:59.675366    5552 out.go:296] Setting OutFile to fd 1584 ...
	I0604 16:15:59.729363    5552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:15:59.729363    5552 out.go:309] Setting ErrFile to fd 1880...
	I0604 16:15:59.729363    5552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:15:59.740363    5552 out.go:303] Setting JSON to false
	I0604 16:15:59.742370    5552 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10431,"bootTime":1654348928,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:15:59.742370    5552 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:15:59.750369    5552 out.go:177] * [docker-flags-20220604161559-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:15:59.753368    5552 notify.go:193] Checking for updates...
	I0604 16:15:59.756368    5552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:15:59.758363    5552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:15:59.760368    5552 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:15:59.763364    5552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:15:59.765364    5552 config.go:178] Loaded profile config "NoKubernetes-20220604161047-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0604 16:15:59.766363    5552 config.go:178] Loaded profile config "cert-expiration-20220604161540-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:15:59.766363    5552 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:15:59.766363    5552 config.go:178] Loaded profile config "pause-20220604161529-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:15:59.766363    5552 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:16:02.450406    5552 docker.go:137] docker version: linux-20.10.16
	I0604 16:16:02.450406    5552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:16:04.564897    5552 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1144683s)
	I0604 16:16:04.564897    5552 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:16:03.540673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:16:04.569491    5552 out.go:177] * Using the docker driver based on user configuration
	I0604 16:16:04.572516    5552 start.go:284] selected driver: docker
	I0604 16:16:04.572516    5552 start.go:806] validating driver "docker" against <nil>
	I0604 16:16:04.572516    5552 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:16:04.645705    5552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:16:06.754471    5552 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1087428s)
	I0604 16:16:06.754471    5552 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:16:05.7413333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:16:06.754471    5552 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:16:06.755476    5552 start_flags.go:842] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0604 16:16:06.758473    5552 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:16:06.760472    5552 cni.go:95] Creating CNI manager for ""
	I0604 16:16:06.760472    5552 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:16:06.760472    5552 start_flags.go:306] config:
	{Name:docker-flags-20220604161559-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:docker-flags-20220604161559-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:16:06.763483    5552 out.go:177] * Starting control plane node docker-flags-20220604161559-5712 in cluster docker-flags-20220604161559-5712
	I0604 16:16:06.766477    5552 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:16:06.768473    5552 out.go:177] * Pulling base image ...
	I0604 16:16:06.777502    5552 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:16:06.778535    5552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:16:06.778535    5552 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:16:06.778535    5552 cache.go:57] Caching tarball of preloaded images
	I0604 16:16:06.778535    5552 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:16:06.779219    5552 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:16:06.779219    5552 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220604161559-5712\config.json ...
	I0604 16:16:06.779219    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220604161559-5712\config.json: {Name:mk82ed0ff656de387520ff3cd0acc43146705ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:16:07.898245    5552 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:16:07.898245    5552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:16:07.898245    5552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:16:07.898245    5552 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:16:07.898245    5552 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:16:07.898245    5552 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:16:07.898245    5552 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:16:07.898245    5552 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:16:07.898245    5552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:16:10.324085    5552 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-1907385766: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-1907385766: read-only file system"}
	I0604 16:16:10.324085    5552 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:16:10.324158    5552 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:16:10.324273    5552 start.go:352] acquiring machines lock for docker-flags-20220604161559-5712: {Name:mka985cd5e441a323087a4fc6a273697e3bcf64f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:16:10.324568    5552 start.go:356] acquired machines lock for "docker-flags-20220604161559-5712" in 219.7µs
	I0604 16:16:10.324803    5552 start.go:91] Provisioning new machine with config: &{Name:docker-flags-20220604161559-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:docker-fla
gs-20220604161559-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:16:10.325032    5552 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:16:10.325243    5552 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:16:10.325243    5552 start.go:165] libmachine.API.Create for "docker-flags-20220604161559-5712" (driver="docker")
	I0604 16:16:10.325243    5552 client.go:168] LocalClient.Create starting
	I0604 16:16:10.325243    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:16:10.325243    5552 main.go:134] libmachine: Decoding PEM data...
	I0604 16:16:10.325243    5552 main.go:134] libmachine: Parsing certificate...
	I0604 16:16:10.325243    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:16:10.325243    5552 main.go:134] libmachine: Decoding PEM data...
	I0604 16:16:10.325243    5552 main.go:134] libmachine: Parsing certificate...
	I0604 16:16:10.346579    5552 cli_runner.go:164] Run: docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:16:11.450778    5552 cli_runner.go:211] docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:16:11.450778    5552 cli_runner.go:217] Completed: docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1041874s)
	I0604 16:16:11.458964    5552 network_create.go:272] running [docker network inspect docker-flags-20220604161559-5712] to gather additional debugging logs...
	I0604 16:16:11.458964    5552 cli_runner.go:164] Run: docker network inspect docker-flags-20220604161559-5712
	W0604 16:16:12.527879    5552 cli_runner.go:211] docker network inspect docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:12.527957    5552 cli_runner.go:217] Completed: docker network inspect docker-flags-20220604161559-5712: (1.0684273s)
	I0604 16:16:12.527957    5552 network_create.go:275] error running [docker network inspect docker-flags-20220604161559-5712]: docker network inspect docker-flags-20220604161559-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220604161559-5712
	I0604 16:16:12.527957    5552 network_create.go:277] output of [docker network inspect docker-flags-20220604161559-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220604161559-5712
	
	** /stderr **
	I0604 16:16:12.534800    5552 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:16:13.640734    5552 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1059216s)
	I0604 16:16:13.670497    5552 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005322a0] misses:0}
	I0604 16:16:13.670552    5552 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:16:13.670552    5552 network_create.go:115] attempt to create docker network docker-flags-20220604161559-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:16:13.678450    5552 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712
	W0604 16:16:14.830014    5552 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:15.218527    5552 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: (1.1515513s)
	E0604 16:16:15.218760    5552 network_create.go:104] error while trying to create docker network docker-flags-20220604161559-5712 192.168.49.0/24: create docker network docker-flags-20220604161559-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 99f6613bc3b33fdeffb8e38b7f3134f902f3d1fa7307d1b7d44a65afb6e58b25 (br-99f6613bc3b3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:16:15.218808    5552 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220604161559-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 99f6613bc3b33fdeffb8e38b7f3134f902f3d1fa7307d1b7d44a65afb6e58b25 (br-99f6613bc3b3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220604161559-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 99f6613bc3b33fdeffb8e38b7f3134f902f3d1fa7307d1b7d44a65afb6e58b25 (br-99f6613bc3b3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:16:15.241815    5552 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:16:16.360940    5552 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1191133s)
	I0604 16:16:16.367921    5552 cli_runner.go:164] Run: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:16:17.437472    5552 cli_runner.go:211] docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:16:17.437653    5552 cli_runner.go:217] Completed: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0695396s)
	I0604 16:16:17.437653    5552 client.go:171] LocalClient.Create took 7.1123328s
	I0604 16:16:19.461619    5552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:16:19.467672    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:16:20.525145    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:20.525145    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0574623s)
	I0604 16:16:20.526027    5552 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:20.811581    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:16:21.848368    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:21.848368    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.036593s)
	W0604 16:16:21.848746    5552 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	W0604 16:16:21.848746    5552 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:21.857865    5552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:16:21.865989    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:16:22.888539    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:22.888539    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0225394s)
	I0604 16:16:22.888539    5552 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:23.190135    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:16:24.293327    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:24.293327    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.1031797s)
	W0604 16:16:24.293327    5552 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	W0604 16:16:24.293327    5552 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:24.293327    5552 start.go:134] duration metric: createHost completed in 13.9679327s
	I0604 16:16:24.293327    5552 start.go:81] releasing machines lock for "docker-flags-20220604161559-5712", held for 13.9684835s
	W0604 16:16:24.293327    5552 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for docker-flags-20220604161559-5712 container: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system
	I0604 16:16:24.307309    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:25.387285    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:25.387320    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.0797024s)
	I0604 16:16:25.387402    5552 delete.go:82] Unable to get host status for docker-flags-20220604161559-5712, assuming it has already been deleted: state: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	W0604 16:16:25.387661    5552 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220604161559-5712 container: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220604161559-5712 container: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system
	
	I0604 16:16:25.387661    5552 start.go:614] Will try again in 5 seconds ...
	I0604 16:16:30.397173    5552 start.go:352] acquiring machines lock for docker-flags-20220604161559-5712: {Name:mka985cd5e441a323087a4fc6a273697e3bcf64f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:16:30.397614    5552 start.go:356] acquired machines lock for "docker-flags-20220604161559-5712" in 441.1µs
	I0604 16:16:30.397754    5552 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:16:30.397754    5552 fix.go:55] fixHost starting: 
	I0604 16:16:30.412524    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:31.497706    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:31.497777    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.0850026s)
	I0604 16:16:31.497777    5552 fix.go:103] recreateIfNeeded on docker-flags-20220604161559-5712: state= err=unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:31.497777    5552 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:16:31.501559    5552 out.go:177] * docker "docker-flags-20220604161559-5712" container is missing, will recreate.
	I0604 16:16:31.504583    5552 delete.go:124] DEMOLISHING docker-flags-20220604161559-5712 ...
	I0604 16:16:31.523759    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:32.637259    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:32.637455    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.1134882s)
	W0604 16:16:32.637655    5552 stop.go:75] unable to get state: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:32.637695    5552 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:32.653301    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:33.723111    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:33.723111    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.0697984s)
	I0604 16:16:33.723111    5552 delete.go:82] Unable to get host status for docker-flags-20220604161559-5712, assuming it has already been deleted: state: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:33.730102    5552 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220604161559-5712
	W0604 16:16:34.836284    5552 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:34.836342    5552 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} docker-flags-20220604161559-5712: (1.1060648s)
	I0604 16:16:34.836342    5552 kic.go:356] could not find the container docker-flags-20220604161559-5712 to remove it. will try anyways
	I0604 16:16:34.842984    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:35.952031    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:35.952031    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.1090356s)
	W0604 16:16:35.952031    5552 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:35.960112    5552 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-20220604161559-5712 /bin/bash -c "sudo init 0"
	W0604 16:16:36.992834    5552 cli_runner.go:211] docker exec --privileged -t docker-flags-20220604161559-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:16:36.992834    5552 cli_runner.go:217] Completed: docker exec --privileged -t docker-flags-20220604161559-5712 /bin/bash -c "sudo init 0": (1.032711s)
	I0604 16:16:36.992834    5552 oci.go:625] error shutdown docker-flags-20220604161559-5712: docker exec --privileged -t docker-flags-20220604161559-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:38.013112    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:39.080890    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:39.080890    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.0677667s)
	I0604 16:16:39.080890    5552 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:39.080890    5552 oci.go:639] temporary error: container docker-flags-20220604161559-5712 status is  but expect it to be exited
	I0604 16:16:39.080890    5552 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:39.569838    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:40.642639    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:40.642639    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.072789s)
	I0604 16:16:40.642639    5552 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:40.642639    5552 oci.go:639] temporary error: container docker-flags-20220604161559-5712 status is  but expect it to be exited
	I0604 16:16:40.642639    5552 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:41.545214    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:42.646878    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:42.646878    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.1016516s)
	I0604 16:16:42.646878    5552 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:42.646878    5552 oci.go:639] temporary error: container docker-flags-20220604161559-5712 status is  but expect it to be exited
	I0604 16:16:42.646878    5552 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:43.310787    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:44.433445    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:44.433445    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.1226463s)
	I0604 16:16:44.433445    5552 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:44.433445    5552 oci.go:639] temporary error: container docker-flags-20220604161559-5712 status is  but expect it to be exited
	I0604 16:16:44.433445    5552 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:45.553147    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:46.709575    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:46.710087    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.1564147s)
	I0604 16:16:46.710087    5552 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:46.710087    5552 oci.go:639] temporary error: container docker-flags-20220604161559-5712 status is  but expect it to be exited
	I0604 16:16:46.710087    5552 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:48.239811    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:49.313721    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:49.313721    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.0738981s)
	I0604 16:16:49.313721    5552 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:49.313721    5552 oci.go:639] temporary error: container docker-flags-20220604161559-5712 status is  but expect it to be exited
	I0604 16:16:49.313721    5552 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:52.370405    5552 cli_runner.go:164] Run: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}
	W0604 16:16:53.461126    5552 cli_runner.go:211] docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:16:53.461126    5552 cli_runner.go:217] Completed: docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: (1.0907087s)
	I0604 16:16:53.461126    5552 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:16:53.461126    5552 oci.go:639] temporary error: container docker-flags-20220604161559-5712 status is  but expect it to be exited
	I0604 16:16:53.461126    5552 oci.go:88] couldn't shut down docker-flags-20220604161559-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
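The retry cadence above (462ms, 890ms, 636ms, 1.11s, 1.51s, 3.04s) is consistent with a jittered, roughly geometric backoff. A minimal sketch of that pattern, where the growth factor and jitter range are assumptions since minikube's actual retry.go parameters are not shown in this log:

```python
import random

def jittered_backoff(base=0.5, factor=1.5, attempts=6, seed=None):
    """Yield retry delays that grow geometrically, with random jitter applied."""
    rng = random.Random(seed)
    delay = base
    for _ in range(attempts):
        # jitter each delay by +/-50% so concurrent retries don't align
        yield delay * rng.uniform(0.5, 1.5)
        delay *= factor

delays = list(jittered_backoff(seed=42))
```

The jitter explains why the observed delays do not strictly increase from one attempt to the next.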
	 
	I0604 16:16:53.470178    5552 cli_runner.go:164] Run: docker rm -f -v docker-flags-20220604161559-5712
	I0604 16:16:54.581475    5552 cli_runner.go:217] Completed: docker rm -f -v docker-flags-20220604161559-5712: (1.1111524s)
	I0604 16:16:54.594897    5552 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220604161559-5712
	W0604 16:16:55.778927    5552 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:55.779015    5552 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} docker-flags-20220604161559-5712: (1.1838222s)
	I0604 16:16:55.787360    5552 cli_runner.go:164] Run: docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:16:56.954832    5552 cli_runner.go:211] docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:16:56.954832    5552 cli_runner.go:217] Completed: docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1674588s)
	I0604 16:16:56.963197    5552 network_create.go:272] running [docker network inspect docker-flags-20220604161559-5712] to gather additional debugging logs...
	I0604 16:16:56.963197    5552 cli_runner.go:164] Run: docker network inspect docker-flags-20220604161559-5712
	W0604 16:16:58.099241    5552 cli_runner.go:211] docker network inspect docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:16:58.099241    5552 cli_runner.go:217] Completed: docker network inspect docker-flags-20220604161559-5712: (1.1360311s)
	I0604 16:16:58.099241    5552 network_create.go:275] error running [docker network inspect docker-flags-20220604161559-5712]: docker network inspect docker-flags-20220604161559-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220604161559-5712
	I0604 16:16:58.099241    5552 network_create.go:277] output of [docker network inspect docker-flags-20220604161559-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220604161559-5712
	
	** /stderr **
	W0604 16:16:58.100238    5552 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:16:58.100238    5552 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:16:59.104847    5552 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:16:59.108843    5552 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:16:59.109526    5552 start.go:165] libmachine.API.Create for "docker-flags-20220604161559-5712" (driver="docker")
	I0604 16:16:59.109526    5552 client.go:168] LocalClient.Create starting
	I0604 16:16:59.109526    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:16:59.110165    5552 main.go:134] libmachine: Decoding PEM data...
	I0604 16:16:59.110223    5552 main.go:134] libmachine: Parsing certificate...
	I0604 16:16:59.110223    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:16:59.110223    5552 main.go:134] libmachine: Decoding PEM data...
	I0604 16:16:59.110223    5552 main.go:134] libmachine: Parsing certificate...
	I0604 16:16:59.118728    5552 cli_runner.go:164] Run: docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:17:00.342785    5552 cli_runner.go:211] docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:17:00.343031    5552 cli_runner.go:217] Completed: docker network inspect docker-flags-20220604161559-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2240438s)
	I0604 16:17:00.353823    5552 network_create.go:272] running [docker network inspect docker-flags-20220604161559-5712] to gather additional debugging logs...
	I0604 16:17:00.353823    5552 cli_runner.go:164] Run: docker network inspect docker-flags-20220604161559-5712
	W0604 16:17:01.455032    5552 cli_runner.go:211] docker network inspect docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:01.455032    5552 cli_runner.go:217] Completed: docker network inspect docker-flags-20220604161559-5712: (1.1009637s)
	I0604 16:17:01.455032    5552 network_create.go:275] error running [docker network inspect docker-flags-20220604161559-5712]: docker network inspect docker-flags-20220604161559-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220604161559-5712
	I0604 16:17:01.455032    5552 network_create.go:277] output of [docker network inspect docker-flags-20220604161559-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220604161559-5712
	
	** /stderr **
	I0604 16:17:01.462122    5552 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:17:02.677292    5552 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2149109s)
	I0604 16:17:02.694179    5552 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005322a0] amended:false}} dirty:map[] misses:0}
	I0604 16:17:02.694179    5552 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:17:02.709278    5552 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005322a0] amended:true}} dirty:map[192.168.49.0:0xc0005322a0 192.168.58.0:0xc000722240] misses:0}
	I0604 16:17:02.709278    5552 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
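The two lines above show minikube skipping the reserved 192.168.49.0/24 and settling on 192.168.58.0/24. A hypothetical sketch of that selection loop, where the step of 9 between third octets is an assumption inferred only from the jump 49 -> 58 in this log:

```python
def pick_subnet(reserved):
    """Return the first private /24 candidate not in the reserved set."""
    third_octet = 49
    while third_octet <= 254:
        candidate = f"192.168.{third_octet}.0/24"
        if candidate not in reserved:
            return candidate
        third_octet += 9  # assumed step, matching 49 -> 58 seen in the log
    return None

print(pick_subnet({"192.168.49.0/24"}))  # 192.168.58.0/24
```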
	I0604 16:17:02.710039    5552 network_create.go:115] attempt to create docker network docker-flags-20220604161559-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:17:02.717272    5552 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712
	W0604 16:17:03.818461    5552 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:03.818461    5552 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: (1.1011777s)
	E0604 16:17:03.818461    5552 network_create.go:104] error while trying to create docker network docker-flags-20220604161559-5712 192.168.58.0/24: create docker network docker-flags-20220604161559-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network eb0f4f8fa6063d57a59191dd44dce6eb4b039d5c4efe30d0f67ad57dde0b9054 (br-eb0f4f8fa606): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:17:03.818461    5552 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220604161559-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network eb0f4f8fa6063d57a59191dd44dce6eb4b039d5c4efe30d0f67ad57dde0b9054 (br-eb0f4f8fa606): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220604161559-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network eb0f4f8fa6063d57a59191dd44dce6eb4b039d5c4efe30d0f67ad57dde0b9054 (br-eb0f4f8fa606): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
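The daemon error above means the requested 192.168.58.0/24 collides with the address range of an existing bridge network. The overlap check itself can be sketched with Python's `ipaddress` module; the conflicting subnet below is an illustrative stand-in, since the log does not show the range claimed by br-1140b1ac4d94:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# hypothetical conflicting range standing in for the existing bridge network
print(subnets_overlap("192.168.58.0/24", "192.168.58.0/25"))  # True
```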
	
	I0604 16:17:03.832463    5552 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:17:04.925264    5552 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0927897s)
	I0604 16:17:04.931247    5552 cli_runner.go:164] Run: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:17:06.018996    5552 cli_runner.go:211] docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:17:06.018996    5552 cli_runner.go:217] Completed: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0877381s)
	I0604 16:17:06.018996    5552 client.go:171] LocalClient.Create took 6.9093953s
	I0604 16:17:08.039554    5552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:17:08.046243    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:09.115920    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:09.116875    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0696652s)
	I0604 16:17:09.116875    5552 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:09.469772    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:10.505201    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:10.505201    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0354182s)
	W0604 16:17:10.505201    5552 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	W0604 16:17:10.505201    5552 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:10.516063    5552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:17:10.525671    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:11.549074    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:11.549223    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0233655s)
	I0604 16:17:11.549223    5552 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:11.779708    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:12.839504    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:12.839504    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0597853s)
	W0604 16:17:12.839504    5552 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	W0604 16:17:12.839504    5552 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:12.839504    5552 start.go:134] duration metric: createHost completed in 13.7344304s
	I0604 16:17:12.848474    5552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:17:12.856611    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:13.931304    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:13.931502    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0746812s)
	I0604 16:17:13.931730    5552 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:14.193408    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:15.255820    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:15.255820    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.062207s)
	W0604 16:17:15.256035    5552 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	W0604 16:17:15.256035    5552 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:15.267482    5552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:17:15.275274    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:16.345175    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:16.345333    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0698033s)
	I0604 16:17:16.345333    5552 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:16.558916    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712
	W0604 16:17:17.574262    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712 returned with exit code 1
	I0604 16:17:17.574262    5552 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: (1.0153351s)
	W0604 16:17:17.574262    5552 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	W0604 16:17:17.574262    5552 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220604161559-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220604161559-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	I0604 16:17:17.574262    5552 fix.go:57] fixHost completed within 47.1759962s
	I0604 16:17:17.574262    5552 start.go:81] releasing machines lock for "docker-flags-20220604161559-5712", held for 47.1761361s
	W0604 16:17:17.574262    5552 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-20220604161559-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220604161559-5712 container: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p docker-flags-20220604161559-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220604161559-5712 container: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system
	
	I0604 16:17:17.607034    5552 out.go:177] 
	W0604 16:17:17.609804    5552 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220604161559-5712 container: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220604161559-5712 container: docker volume create docker-flags-20220604161559-5712 --label name.minikube.sigs.k8s.io=docker-flags-20220604161559-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system
	
	W0604 16:17:17.609804    5552 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:17:17.609804    5552 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:17:17.615023    5552 out.go:177] 

** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-20220604161559-5712 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220604161559-5712 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220604161559-5712 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (3.1438124s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_d4f85ee29175a4f8b67ccfa3331e6e8264cb6e77_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220604161559-5712 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220604161559-5712 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220604161559-5712 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (3.2170686s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_e7205990054f4366ee7f5bb530c13b1f3df973dc_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220604161559-5712 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:67: expected "out/minikube-windows-amd64.exe -p docker-flags-20220604161559-5712 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:482: *** TestDockerFlags FAILED at 2022-06-04 16:17:24.0995238 +0000 GMT m=+3449.290674501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20220604161559-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect docker-flags-20220604161559-5712: exit status 1 (1.1607756s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: docker-flags-20220604161559-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220604161559-5712 -n docker-flags-20220604161559-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220604161559-5712 -n docker-flags-20220604161559-5712: exit status 7 (2.8619485s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:17:28.103132    6472 status.go:247] status error: host: state: unknown state "docker-flags-20220604161559-5712": docker container inspect docker-flags-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220604161559-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-20220604161559-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-20220604161559-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220604161559-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220604161559-5712: (8.4051101s)
--- FAIL: TestDockerFlags (97.09s)
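The `PR_DOCKER_READONLY_VOL` exit above is minikube matching the daemon's `read-only file system` stderr and mapping it to a known-issue reason code with a "Restart Docker" suggestion. A minimal shell sketch of that kind of classification (the `err` string is copied verbatim from the log above; the case arms and messages are illustrative, not minikube's actual implementation):

```shell
# One stderr line copied verbatim from the failure above.
err="Error response from daemon: create docker-flags-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220604161559-5712': mkdir /var/lib/docker/volumes/docker-flags-20220604161559-5712: read-only file system"

# Map the daemon error text to the reason code the run above reported.
case "$err" in
  *"read-only file system"*) reason="PR_DOCKER_READONLY_VOL" ;;
  *"No such container"*)     reason="GUEST_STATUS" ;;
  *)                         reason="UNKNOWN" ;;
esac

echo "$reason: try restarting Docker (https://github.com/kubernetes/minikube/issues/6825)"
```

Under this failure mode the Docker Desktop WSL2 data disk is mounted read-only, so every `docker volume create` and image import fails the same way until the daemon is restarted.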

TestForceSystemdFlag (93.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220604161219-5712 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220604161219-5712 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 60 (1m16.336697s)

-- stdout --
	* [force-systemd-flag-20220604161219-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node force-systemd-flag-20220604161219-5712 in cluster force-systemd-flag-20220604161219-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-20220604161219-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:12:19.951469    8508 out.go:296] Setting OutFile to fd 828 ...
	I0604 16:12:20.006948    8508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:12:20.006948    8508 out.go:309] Setting ErrFile to fd 1556...
	I0604 16:12:20.006948    8508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:12:20.019237    8508 out.go:303] Setting JSON to false
	I0604 16:12:20.022323    8508 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10212,"bootTime":1654348928,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:12:20.022323    8508 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:12:20.027322    8508 out.go:177] * [force-systemd-flag-20220604161219-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:12:20.033515    8508 notify.go:193] Checking for updates...
	I0604 16:12:20.035926    8508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:12:20.033824    8508 preload.go:306] deleting older generation preload C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I0604 16:12:20.039977    8508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:12:20.042859    8508 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:12:20.045168    8508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:12:20.048777    8508 config.go:178] Loaded profile config "NoKubernetes-20220604161047-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0604 16:12:20.048777    8508 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:12:20.049784    8508 config.go:178] Loaded profile config "running-upgrade-20220604161047-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0604 16:12:20.049784    8508 config.go:178] Loaded profile config "stopped-upgrade-20220604161047-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0604 16:12:20.049784    8508 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:12:20.094810    8508 preload.go:306] deleting older generation preload C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.checksum
	I0604 16:12:22.644729    8508 docker.go:137] docker version: linux-20.10.16
	I0604 16:12:22.651764    8508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:12:24.760480    8508 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1086932s)
	I0604 16:12:24.761528    8508 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:12:23.6788564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:12:24.765927    8508 out.go:177] * Using the docker driver based on user configuration
	I0604 16:12:24.768983    8508 start.go:284] selected driver: docker
	I0604 16:12:24.768983    8508 start.go:806] validating driver "docker" against <nil>
	I0604 16:12:24.768983    8508 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:12:24.846441    8508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:12:26.926490    8508 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0800263s)
	I0604 16:12:26.926490    8508 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:12:25.904665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:12:26.927009    8508 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:12:26.927281    8508 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 16:12:26.930413    8508 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:12:26.932348    8508 cni.go:95] Creating CNI manager for ""
	I0604 16:12:26.932348    8508 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:12:26.932348    8508 start_flags.go:306] config:
	{Name:force-systemd-flag-20220604161219-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220604161219-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:12:26.935567    8508 out.go:177] * Starting control plane node force-systemd-flag-20220604161219-5712 in cluster force-systemd-flag-20220604161219-5712
	I0604 16:12:26.937395    8508 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:12:26.939987    8508 out.go:177] * Pulling base image ...
	I0604 16:12:26.943799    8508 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:12:26.944829    8508 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:12:26.944829    8508 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:12:26.944829    8508 cache.go:57] Caching tarball of preloaded images
	I0604 16:12:26.945356    8508 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:12:26.945437    8508 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:12:26.945437    8508 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220604161219-5712\config.json ...
	I0604 16:12:26.945437    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220604161219-5712\config.json: {Name:mk4643a7452853635cf5310de435115f5da399bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:12:28.029866    8508 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:12:28.029917    8508 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:12:28.029917    8508 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:12:28.029917    8508 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:12:28.029917    8508 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:12:28.029917    8508 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:12:28.030611    8508 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:12:28.030654    8508 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:12:28.030698    8508 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:12:30.361414    8508 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-399635251: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-399635251: read-only file system"}
	I0604 16:12:30.361414    8508 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:12:30.361954    8508 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:12:30.362090    8508 start.go:352] acquiring machines lock for force-systemd-flag-20220604161219-5712: {Name:mk6cca08b55c261bb6b6dd8c903d228cd83504cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:12:30.362276    8508 start.go:356] acquired machines lock for "force-systemd-flag-20220604161219-5712" in 144.8µs
	I0604 16:12:30.362517    8508 start.go:91] Provisioning new machine with config: &{Name:force-systemd-flag-20220604161219-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220604161219-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:12:30.362667    8508 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:12:30.383145    8508 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:12:30.384124    8508 start.go:165] libmachine.API.Create for "force-systemd-flag-20220604161219-5712" (driver="docker")
	I0604 16:12:30.384124    8508 client.go:168] LocalClient.Create starting
	I0604 16:12:30.384504    8508 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:12:30.384504    8508 main.go:134] libmachine: Decoding PEM data...
	I0604 16:12:30.384504    8508 main.go:134] libmachine: Parsing certificate...
	I0604 16:12:30.385229    8508 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:12:30.385415    8508 main.go:134] libmachine: Decoding PEM data...
	I0604 16:12:30.385415    8508 main.go:134] libmachine: Parsing certificate...
	I0604 16:12:30.394169    8508 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:12:31.436954    8508 cli_runner.go:211] docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:12:31.436954    8508 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0427742s)
	I0604 16:12:31.444955    8508 network_create.go:272] running [docker network inspect force-systemd-flag-20220604161219-5712] to gather additional debugging logs...
	I0604 16:12:31.444955    8508 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220604161219-5712
	W0604 16:12:32.499953    8508 cli_runner.go:211] docker network inspect force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:12:32.499953    8508 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220604161219-5712: (1.0549868s)
	I0604 16:12:32.499953    8508 network_create.go:275] error running [docker network inspect force-systemd-flag-20220604161219-5712]: docker network inspect force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220604161219-5712
	I0604 16:12:32.499953    8508 network_create.go:277] output of [docker network inspect force-systemd-flag-20220604161219-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220604161219-5712
	
	** /stderr **
	I0604 16:12:32.512385    8508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:12:33.569484    8508 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0569211s)
	I0604 16:12:33.590222    8508 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007c6348] misses:0}
	I0604 16:12:33.590222    8508 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:12:33.590222    8508 network_create.go:115] attempt to create docker network force-systemd-flag-20220604161219-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:12:33.597391    8508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712
	W0604 16:12:34.686698    8508 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:12:34.686698    8508 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: (1.0883307s)
	E0604 16:12:34.686698    8508 network_create.go:104] error while trying to create docker network force-systemd-flag-20220604161219-5712 192.168.49.0/24: create docker network force-systemd-flag-20220604161219-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a8a010c1418d3cd928b171843fe2f5864e2d5365890727b9a7825e210220a649 (br-a8a010c1418d): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:12:34.686698    8508 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220604161219-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a8a010c1418d3cd928b171843fe2f5864e2d5365890727b9a7825e210220a649 (br-a8a010c1418d): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220604161219-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a8a010c1418d3cd928b171843fe2f5864e2d5365890727b9a7825e210220a649 (br-a8a010c1418d): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:12:34.700714    8508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:12:35.735875    8508 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0350126s)
	I0604 16:12:35.743746    8508 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:12:36.781609    8508 cli_runner.go:211] docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:12:36.781702    8508 cli_runner.go:217] Completed: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0378521s)
	I0604 16:12:36.781787    8508 client.go:171] LocalClient.Create took 6.3975421s
	I0604 16:12:38.796023    8508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:12:38.804002    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:12:39.901573    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:12:39.901573    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.09744s)
	I0604 16:12:39.901573    8508 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:40.191509    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:12:41.213722    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:12:41.213722    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0214965s)
	W0604 16:12:41.214117    8508 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	
	W0604 16:12:41.214201    8508 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:41.226546    8508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:12:41.232808    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:12:42.301283    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:12:42.301619    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0684652s)
	I0604 16:12:42.301721    8508 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:42.608107    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:12:43.629291    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:12:43.629323    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0209955s)
	W0604 16:12:43.629626    8508 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	
	W0604 16:12:43.629674    8508 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:43.629729    8508 start.go:134] duration metric: createHost completed in 13.2669252s
	I0604 16:12:43.629729    8508 start.go:81] releasing machines lock for "force-systemd-flag-20220604161219-5712", held for 13.2672591s
	W0604 16:12:43.629953    8508 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220604161219-5712 container: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220604161219-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220604161219-5712': mkdir /var/lib/docker/volumes/force-systemd-flag-20220604161219-5712: read-only file system
	I0604 16:12:43.643397    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:12:44.669973    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:12:44.670042    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0264104s)
	I0604 16:12:44.670149    8508 delete.go:82] Unable to get host status for force-systemd-flag-20220604161219-5712, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	W0604 16:12:44.670480    8508 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220604161219-5712 container: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220604161219-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220604161219-5712': mkdir /var/lib/docker/volumes/force-systemd-flag-20220604161219-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220604161219-5712 container: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220604161219-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220604161219-5712': mkdir /var/lib/docker/volumes/force-systemd-flag-20220604161219-5712: read-only file system
	
	I0604 16:12:44.670480    8508 start.go:614] Will try again in 5 seconds ...
	I0604 16:12:49.679208    8508 start.go:352] acquiring machines lock for force-systemd-flag-20220604161219-5712: {Name:mk6cca08b55c261bb6b6dd8c903d228cd83504cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:12:49.679208    8508 start.go:356] acquired machines lock for "force-systemd-flag-20220604161219-5712" in 0s
	I0604 16:12:49.679208    8508 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:12:49.679208    8508 fix.go:55] fixHost starting: 
	I0604 16:12:49.695217    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:12:50.725702    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:12:50.725702    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0304759s)
	I0604 16:12:50.725702    8508 fix.go:103] recreateIfNeeded on force-systemd-flag-20220604161219-5712: state= err=unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:50.725702    8508 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:12:50.728748    8508 out.go:177] * docker "force-systemd-flag-20220604161219-5712" container is missing, will recreate.
	I0604 16:12:50.732750    8508 delete.go:124] DEMOLISHING force-systemd-flag-20220604161219-5712 ...
	I0604 16:12:50.745695    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:12:51.823107    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:12:51.823107    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0774022s)
	W0604 16:12:51.823107    8508 stop.go:75] unable to get state: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:51.823107    8508 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:51.838051    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:12:52.954377    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:12:52.954377    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.1151181s)
	I0604 16:12:52.954377    8508 delete.go:82] Unable to get host status for force-systemd-flag-20220604161219-5712, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:52.954377    8508 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220604161219-5712
	W0604 16:12:54.065108    8508 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:12:54.065108    8508 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-flag-20220604161219-5712: (1.110535s)
	I0604 16:12:54.065210    8508 kic.go:356] could not find the container force-systemd-flag-20220604161219-5712 to remove it. will try anyways
	I0604 16:12:54.073603    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:12:55.135781    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:12:55.135781    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0621686s)
	W0604 16:12:55.135781    8508 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:55.144038    8508 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-20220604161219-5712 /bin/bash -c "sudo init 0"
	W0604 16:12:56.225968    8508 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-20220604161219-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:12:56.225968    8508 cli_runner.go:217] Completed: docker exec --privileged -t force-systemd-flag-20220604161219-5712 /bin/bash -c "sudo init 0": (1.0819202s)
	I0604 16:12:56.225968    8508 oci.go:625] error shutdown force-systemd-flag-20220604161219-5712: docker exec --privileged -t force-systemd-flag-20220604161219-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:57.244817    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:12:58.280959    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:12:58.280959    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0361332s)
	I0604 16:12:58.280959    8508 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:58.280959    8508 oci.go:639] temporary error: container force-systemd-flag-20220604161219-5712 status is  but expect it to be exited
	I0604 16:12:58.280959    8508 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:58.750717    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:12:59.814991    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:12:59.814991    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0642651s)
	I0604 16:12:59.814991    8508 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:12:59.814991    8508 oci.go:639] temporary error: container force-systemd-flag-20220604161219-5712 status is  but expect it to be exited
	I0604 16:12:59.814991    8508 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:00.728320    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:13:01.776053    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:13:01.776053    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0475584s)
	I0604 16:13:01.776053    8508 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:01.776053    8508 oci.go:639] temporary error: container force-systemd-flag-20220604161219-5712 status is  but expect it to be exited
	I0604 16:13:01.776053    8508 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:02.434126    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:13:03.479117    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:13:03.479117    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0444499s)
	I0604 16:13:03.479117    8508 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:03.479117    8508 oci.go:639] temporary error: container force-systemd-flag-20220604161219-5712 status is  but expect it to be exited
	I0604 16:13:03.479117    8508 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:04.608827    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:13:05.687297    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:13:05.687297    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0779264s)
	I0604 16:13:05.687297    8508 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:05.687297    8508 oci.go:639] temporary error: container force-systemd-flag-20220604161219-5712 status is  but expect it to be exited
	I0604 16:13:05.687297    8508 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:07.208400    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:13:08.282977    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:13:08.282977    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.074568s)
	I0604 16:13:08.282977    8508 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:08.282977    8508 oci.go:639] temporary error: container force-systemd-flag-20220604161219-5712 status is  but expect it to be exited
	I0604 16:13:08.282977    8508 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:11.332190    8508 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}
	W0604 16:13:12.377889    8508 cli_runner.go:211] docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:13:12.377889    8508 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: (1.0456879s)
	I0604 16:13:12.377889    8508 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:12.377889    8508 oci.go:639] temporary error: container force-systemd-flag-20220604161219-5712 status is  but expect it to be exited
	I0604 16:13:12.377889    8508 oci.go:88] couldn't shut down force-systemd-flag-20220604161219-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	 
	I0604 16:13:12.384841    8508 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-20220604161219-5712
	I0604 16:13:13.410519    8508 cli_runner.go:217] Completed: docker rm -f -v force-systemd-flag-20220604161219-5712: (1.025666s)
	I0604 16:13:13.418518    8508 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220604161219-5712
	W0604 16:13:14.457383    8508 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:14.457383    8508 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-flag-20220604161219-5712: (1.0388538s)
	I0604 16:13:14.464943    8508 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:13:15.526723    8508 cli_runner.go:211] docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:13:15.526723    8508 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0616854s)
	I0604 16:13:15.534786    8508 network_create.go:272] running [docker network inspect force-systemd-flag-20220604161219-5712] to gather additional debugging logs...
	I0604 16:13:15.534786    8508 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220604161219-5712
	W0604 16:13:16.562630    8508 cli_runner.go:211] docker network inspect force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:16.562630    8508 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220604161219-5712: (1.0276118s)
	I0604 16:13:16.562733    8508 network_create.go:275] error running [docker network inspect force-systemd-flag-20220604161219-5712]: docker network inspect force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220604161219-5712
	I0604 16:13:16.562733    8508 network_create.go:277] output of [docker network inspect force-systemd-flag-20220604161219-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220604161219-5712
	
	** /stderr **
	W0604 16:13:16.563007    8508 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:13:16.563007    8508 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:13:17.563323    8508 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:13:17.570894    8508 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:13:17.571043    8508 start.go:165] libmachine.API.Create for "force-systemd-flag-20220604161219-5712" (driver="docker")
	I0604 16:13:17.571146    8508 client.go:168] LocalClient.Create starting
	I0604 16:13:17.571764    8508 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:13:17.571985    8508 main.go:134] libmachine: Decoding PEM data...
	I0604 16:13:17.572044    8508 main.go:134] libmachine: Parsing certificate...
	I0604 16:13:17.572129    8508 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:13:17.572323    8508 main.go:134] libmachine: Decoding PEM data...
	I0604 16:13:17.572415    8508 main.go:134] libmachine: Parsing certificate...
	I0604 16:13:17.580526    8508 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:13:18.658019    8508 cli_runner.go:211] docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:13:18.658019    8508 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220604161219-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0774813s)
	I0604 16:13:18.665364    8508 network_create.go:272] running [docker network inspect force-systemd-flag-20220604161219-5712] to gather additional debugging logs...
	I0604 16:13:18.665364    8508 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220604161219-5712
	W0604 16:13:19.796582    8508 cli_runner.go:211] docker network inspect force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:19.796582    8508 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220604161219-5712: (1.1312053s)
	I0604 16:13:19.796700    8508 network_create.go:275] error running [docker network inspect force-systemd-flag-20220604161219-5712]: docker network inspect force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220604161219-5712
	I0604 16:13:19.796700    8508 network_create.go:277] output of [docker network inspect force-systemd-flag-20220604161219-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220604161219-5712
	
	** /stderr **
	I0604 16:13:19.803606    8508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:13:20.896727    8508 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0930024s)
	I0604 16:13:20.914528    8508 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007c6348] amended:false}} dirty:map[] misses:0}
	I0604 16:13:20.914528    8508 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:13:20.932023    8508 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007c6348] amended:true}} dirty:map[192.168.49.0:0xc0007c6348 192.168.58.0:0xc0005aa168] misses:0}
	I0604 16:13:20.932111    8508 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:13:20.932111    8508 network_create.go:115] attempt to create docker network force-systemd-flag-20220604161219-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:13:20.939060    8508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712
	W0604 16:13:22.040281    8508 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:22.040281    8508 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: (1.1012088s)
	E0604 16:13:22.040281    8508 network_create.go:104] error while trying to create docker network force-systemd-flag-20220604161219-5712 192.168.58.0/24: create docker network force-systemd-flag-20220604161219-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 985f5e41bf6dfae330f416dabb4c52227eb3ed90dfeeb5626cef88256a823e6f (br-985f5e41bf6d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:13:22.040281    8508 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220604161219-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 985f5e41bf6dfae330f416dabb4c52227eb3ed90dfeeb5626cef88256a823e6f (br-985f5e41bf6d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220604161219-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 985f5e41bf6dfae330f416dabb4c52227eb3ed90dfeeb5626cef88256a823e6f (br-985f5e41bf6d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:13:22.057367    8508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:13:23.142227    8508 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0845225s)
	I0604 16:13:23.149459    8508 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:13:24.195364    8508 cli_runner.go:211] docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:13:24.195409    8508 cli_runner.go:217] Completed: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0457186s)
	I0604 16:13:24.195589    8508 client.go:171] LocalClient.Create took 6.6243699s
	I0604 16:13:26.211588    8508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:13:26.217766    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:27.284342    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:27.284342    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0655591s)
	I0604 16:13:27.284342    8508 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:27.631864    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:28.693763    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:28.693848    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0617821s)
	W0604 16:13:28.694148    8508 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	
	W0604 16:13:28.694253    8508 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:28.704605    8508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:13:28.711489    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:29.781582    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:29.781582    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0700805s)
	I0604 16:13:29.781582    8508 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:30.010540    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:31.103324    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:31.103324    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.092772s)
	W0604 16:13:31.103324    8508 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	
	W0604 16:13:31.103324    8508 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:31.103324    8508 start.go:134] duration metric: createHost completed in 13.5396115s
	I0604 16:13:31.114337    8508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:13:31.122335    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:32.180368    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:32.180368    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0580209s)
	I0604 16:13:32.180368    8508 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:32.441268    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:33.502769    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:33.502769    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0614887s)
	W0604 16:13:33.502769    8508 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	
	W0604 16:13:33.502769    8508 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:33.514452    8508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:13:33.521150    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:34.569916    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:34.569916    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.0487542s)
	I0604 16:13:34.569916    8508 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:34.783813    8508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712
	W0604 16:13:36.005586    8508 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712 returned with exit code 1
	I0604 16:13:36.005586    8508 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: (1.2217597s)
	W0604 16:13:36.005586    8508 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	
	W0604 16:13:36.005586    8508 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220604161219-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220604161219-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	I0604 16:13:36.005586    8508 fix.go:57] fixHost completed within 46.3259111s
	I0604 16:13:36.005586    8508 start.go:81] releasing machines lock for "force-systemd-flag-20220604161219-5712", held for 46.3259111s
	W0604 16:13:36.006575    8508 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220604161219-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220604161219-5712 container: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220604161219-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220604161219-5712': mkdir /var/lib/docker/volumes/force-systemd-flag-20220604161219-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220604161219-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220604161219-5712 container: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220604161219-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220604161219-5712': mkdir /var/lib/docker/volumes/force-systemd-flag-20220604161219-5712: read-only file system
	
	I0604 16:13:36.012197    8508 out.go:177] 
	W0604 16:13:36.014878    8508 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220604161219-5712 container: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220604161219-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220604161219-5712': mkdir /var/lib/docker/volumes/force-systemd-flag-20220604161219-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220604161219-5712 container: docker volume create force-systemd-flag-20220604161219-5712 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220604161219-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220604161219-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220604161219-5712': mkdir /var/lib/docker/volumes/force-systemd-flag-20220604161219-5712: read-only file system
	
	W0604 16:13:36.015181    8508 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:13:36.015308    8508 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:13:36.018927    8508 out.go:177] 

** /stderr **
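The `docker network create` failure in the log above is the daemon's IPv4 overlap check: the requested 192.168.58.0/24 collided with the range held by a leftover bridge (br-1140b1ac4d94). A minimal illustration of that check using Python's stdlib `ipaddress` module — this is not minikube's actual code (which lives in `network_create.go`), just the condition dockerd enforces:

```python
import ipaddress

def subnets_conflict(a: str, b: str) -> bool:
    """True when two CIDR ranges share any IPv4 addresses, which is
    the condition dockerd rejects with 'networks have overlapping
    IPv4'."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# A hypothetical smaller range inside the subnet minikube reserved:
print(subnets_conflict("192.168.58.0/24", "192.168.58.128/25"))  # True
# The two subnets minikube actually considered do not overlap:
print(subnets_conflict("192.168.49.0/24", "192.168.58.0/24"))    # False
```

This is why minikube first skips 192.168.49.0/24 (reserved) and tries 192.168.58.0/24; the conflict came from a bridge the daemon still held, not from minikube's own reservation table.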
docker_test.go:87: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-20220604161219-5712 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220604161219-5712 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-flag-20220604161219-5712 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (3.9751287s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-flag-20220604161219-5712 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2022-06-04 16:13:40.1404899 +0000 GMT m=+3225.334075901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20220604161219-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-flag-20220604161219-5712: exit status 1 (1.1314077s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-flag-20220604161219-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220604161219-5712 -n force-systemd-flag-20220604161219-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220604161219-5712 -n force-systemd-flag-20220604161219-5712: exit status 7 (2.9752566s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:13:44.223331    5160 status.go:247] status error: host: state: unknown state "force-systemd-flag-20220604161219-5712": docker container inspect force-systemd-flag-20220604161219-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220604161219-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-20220604161219-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-20220604161219-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220604161219-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220604161219-5712: (8.6678865s)
--- FAIL: TestForceSystemdFlag (93.19s)

TestForceSystemdEnv (92.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220604161407-5712 --memory=2048 --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-20220604161407-5712 --memory=2048 --alsologtostderr -v=5 --driver=docker: exit status 60 (1m16.8449839s)

-- stdout --
	* [force-systemd-env-20220604161407-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node force-systemd-env-20220604161407-5712 in cluster force-systemd-env-20220604161407-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20220604161407-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:14:08.066671    7592 out.go:296] Setting OutFile to fd 1412 ...
	I0604 16:14:08.123546    7592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:14:08.123546    7592 out.go:309] Setting ErrFile to fd 1600...
	I0604 16:14:08.123546    7592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:14:08.134090    7592 out.go:303] Setting JSON to false
	I0604 16:14:08.136705    7592 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10320,"bootTime":1654348928,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:14:08.136705    7592 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:14:08.392898    7592 out.go:177] * [force-systemd-env-20220604161407-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:14:08.396570    7592 notify.go:193] Checking for updates...
	I0604 16:14:08.399082    7592 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:14:08.397244    7592 preload.go:306] deleting older generation preload C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I0604 16:14:08.403292    7592 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:14:08.407229    7592 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:14:08.408957    7592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:14:08.412142    7592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0604 16:14:08.415186    7592 config.go:178] Loaded profile config "NoKubernetes-20220604161047-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0604 16:14:08.415842    7592 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:14:08.415842    7592 config.go:178] Loaded profile config "running-upgrade-20220604161047-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0604 16:14:08.416573    7592 config.go:178] Loaded profile config "stopped-upgrade-20220604161047-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0604 16:14:08.416573    7592 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:14:08.455850    7592 preload.go:306] deleting older generation preload C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.checksum
	I0604 16:14:11.119545    7592 docker.go:137] docker version: linux-20.10.16
	I0604 16:14:11.127468    7592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:14:13.155388    7592 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0277927s)
	I0604 16:14:13.156072    7592 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:14:12.161408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:14:13.160173    7592 out.go:177] * Using the docker driver based on user configuration
	I0604 16:14:13.163863    7592 start.go:284] selected driver: docker
	I0604 16:14:13.163863    7592 start.go:806] validating driver "docker" against <nil>
	I0604 16:14:13.163863    7592 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:14:13.307887    7592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:14:15.389212    7592 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0813018s)
	I0604 16:14:15.389212    7592 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:51 SystemTime:2022-06-04 16:14:14.3598542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:14:15.389741    7592 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:14:15.390044    7592 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 16:14:15.392200    7592 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:14:15.394516    7592 cni.go:95] Creating CNI manager for ""
	I0604 16:14:15.394516    7592 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:14:15.394516    7592 start_flags.go:306] config:
	{Name:force-systemd-env-20220604161407-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-env-20220604161407-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:14:15.396915    7592 out.go:177] * Starting control plane node force-systemd-env-20220604161407-5712 in cluster force-systemd-env-20220604161407-5712
	I0604 16:14:15.402210    7592 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:14:15.404555    7592 out.go:177] * Pulling base image ...
	I0604 16:14:15.407737    7592 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:14:15.407737    7592 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:14:15.407737    7592 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:14:15.407737    7592 cache.go:57] Caching tarball of preloaded images
	I0604 16:14:15.408487    7592 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:14:15.408487    7592 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:14:15.408487    7592 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220604161407-5712\config.json ...
	I0604 16:14:15.409142    7592 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220604161407-5712\config.json: {Name:mka12cea8b75e7bbc66d135083800603969a53da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:14:16.502292    7592 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:14:16.502475    7592 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:14:16.502812    7592 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:14:16.502934    7592 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:14:16.503171    7592 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:14:16.503268    7592 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:14:16.503473    7592 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:14:16.503473    7592 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:14:16.503473    7592 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:14:18.796005    7592 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-23371531: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-23371531: read-only file system"}
	I0604 16:14:18.796005    7592 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:14:18.796005    7592 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:14:18.796005    7592 start.go:352] acquiring machines lock for force-systemd-env-20220604161407-5712: {Name:mkdace1fb412360984cf3efaafb44e6e59745a15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:14:18.796563    7592 start.go:356] acquired machines lock for "force-systemd-env-20220604161407-5712" in 0s
	I0604 16:14:18.796731    7592 start.go:91] Provisioning new machine with config: &{Name:force-systemd-env-20220604161407-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-env-20220604161407-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:14:18.796731    7592 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:14:18.802303    7592 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:14:18.802303    7592 start.go:165] libmachine.API.Create for "force-systemd-env-20220604161407-5712" (driver="docker")
	I0604 16:14:18.803130    7592 client.go:168] LocalClient.Create starting
	I0604 16:14:18.803130    7592 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:14:18.803876    7592 main.go:134] libmachine: Decoding PEM data...
	I0604 16:14:18.803876    7592 main.go:134] libmachine: Parsing certificate...
	I0604 16:14:18.803876    7592 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:14:18.803876    7592 main.go:134] libmachine: Decoding PEM data...
	I0604 16:14:18.803876    7592 main.go:134] libmachine: Parsing certificate...
	I0604 16:14:18.813873    7592 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:14:19.959922    7592 cli_runner.go:211] docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:14:19.960085    7592 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1460366s)
	I0604 16:14:19.967280    7592 network_create.go:272] running [docker network inspect force-systemd-env-20220604161407-5712] to gather additional debugging logs...
	I0604 16:14:19.967280    7592 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220604161407-5712
	W0604 16:14:21.029652    7592 cli_runner.go:211] docker network inspect force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:14:21.029652    7592 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220604161407-5712: (1.0623191s)
	I0604 16:14:21.029766    7592 network_create.go:275] error running [docker network inspect force-systemd-env-20220604161407-5712]: docker network inspect force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220604161407-5712
	I0604 16:14:21.029766    7592 network_create.go:277] output of [docker network inspect force-systemd-env-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220604161407-5712
	
	** /stderr **
	I0604 16:14:21.041886    7592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:14:22.074855    7592 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0329576s)
	I0604 16:14:22.094867    7592 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000fa02b8] misses:0}
	I0604 16:14:22.094867    7592 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:14:22.094867    7592 network_create.go:115] attempt to create docker network force-systemd-env-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:14:22.101923    7592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712
	W0604 16:14:23.123733    7592 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:14:23.128706    7592 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: (1.0217992s)
	E0604 16:14:23.128706    7592 network_create.go:104] error while trying to create docker network force-systemd-env-20220604161407-5712 192.168.49.0/24: create docker network force-systemd-env-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3b78957d61d22bd3c629fb8ad397ebfeb15859b5baf83967e607588c8f1a0435 (br-3b78957d61d2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:14:23.128706    7592 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3b78957d61d22bd3c629fb8ad397ebfeb15859b5baf83967e607588c8f1a0435 (br-3b78957d61d2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3b78957d61d22bd3c629fb8ad397ebfeb15859b5baf83967e607588c8f1a0435 (br-3b78957d61d2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:14:23.142958    7592 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:14:24.197141    7592 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0541717s)
	I0604 16:14:24.203167    7592 cli_runner.go:164] Run: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:14:25.300031    7592 cli_runner.go:211] docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:14:25.300031    7592 cli_runner.go:217] Completed: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0957703s)
	I0604 16:14:25.300031    7592 client.go:171] LocalClient.Create took 6.4968306s
	I0604 16:14:27.306364    7592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:14:27.319638    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:14:28.354614    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:14:28.354614    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0347985s)
	I0604 16:14:28.354834    7592 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:28.646715    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:14:29.726494    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:14:29.726526    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0795867s)
	W0604 16:14:29.726725    7592 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	
	W0604 16:14:29.726788    7592 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:29.738028    7592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:14:29.746496    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:14:30.781760    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:14:30.781760    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0352522s)
	I0604 16:14:30.781760    7592 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:31.090358    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:14:32.100322    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:14:32.100322    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0099528s)
	W0604 16:14:32.100322    7592 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	
	W0604 16:14:32.100322    7592 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:32.100322    7592 start.go:134] duration metric: createHost completed in 13.3034456s
	I0604 16:14:32.100322    7592 start.go:81] releasing machines lock for "force-systemd-env-20220604161407-5712", held for 13.3036141s
	W0604 16:14:32.100322    7592 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220604161407-5712 container: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220604161407-5712': mkdir /var/lib/docker/volumes/force-systemd-env-20220604161407-5712: read-only file system
	I0604 16:14:32.115249    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:33.172360    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:33.172360    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0569646s)
	I0604 16:14:33.172360    7592 delete.go:82] Unable to get host status for force-systemd-env-20220604161407-5712, assuming it has already been deleted: state: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	W0604 16:14:33.172360    7592 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220604161407-5712 container: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220604161407-5712': mkdir /var/lib/docker/volumes/force-systemd-env-20220604161407-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220604161407-5712 container: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220604161407-5712': mkdir /var/lib/docker/volumes/force-systemd-env-20220604161407-5712: read-only file system
	
	I0604 16:14:33.172360    7592 start.go:614] Will try again in 5 seconds ...
	I0604 16:14:38.179060    7592 start.go:352] acquiring machines lock for force-systemd-env-20220604161407-5712: {Name:mkdace1fb412360984cf3efaafb44e6e59745a15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:14:38.179372    7592 start.go:356] acquired machines lock for "force-systemd-env-20220604161407-5712" in 154.2µs
	I0604 16:14:38.179372    7592 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:14:38.179372    7592 fix.go:55] fixHost starting: 
	I0604 16:14:38.195028    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:39.237920    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:39.237920    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0427439s)
	I0604 16:14:39.237920    7592 fix.go:103] recreateIfNeeded on force-systemd-env-20220604161407-5712: state= err=unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:39.237920    7592 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:14:39.242006    7592 out.go:177] * docker "force-systemd-env-20220604161407-5712" container is missing, will recreate.
	I0604 16:14:39.244604    7592 delete.go:124] DEMOLISHING force-systemd-env-20220604161407-5712 ...
	I0604 16:14:39.256013    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:40.305968    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:40.305968    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0489441s)
	W0604 16:14:40.305968    7592 stop.go:75] unable to get state: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:40.305968    7592 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:40.319976    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:41.382622    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:41.382622    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0626349s)
	I0604 16:14:41.382622    7592 delete.go:82] Unable to get host status for force-systemd-env-20220604161407-5712, assuming it has already been deleted: state: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:41.389963    7592 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220604161407-5712
	W0604 16:14:42.429600    7592 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:14:42.429600    7592 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220604161407-5712: (1.0396256s)
	I0604 16:14:42.429600    7592 kic.go:356] could not find the container force-systemd-env-20220604161407-5712 to remove it. will try anyways
	I0604 16:14:42.435600    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:43.475543    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:43.475597    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0398s)
	W0604 16:14:43.475597    7592 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:43.483997    7592 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-20220604161407-5712 /bin/bash -c "sudo init 0"
	W0604 16:14:44.516398    7592 cli_runner.go:211] docker exec --privileged -t force-systemd-env-20220604161407-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:14:44.516398    7592 cli_runner.go:217] Completed: docker exec --privileged -t force-systemd-env-20220604161407-5712 /bin/bash -c "sudo init 0": (1.0318753s)
	I0604 16:14:44.516398    7592 oci.go:625] error shutdown force-systemd-env-20220604161407-5712: docker exec --privileged -t force-systemd-env-20220604161407-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:45.526352    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:46.570783    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:46.570921    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0442602s)
	I0604 16:14:46.570921    7592 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:46.570921    7592 oci.go:639] temporary error: container force-systemd-env-20220604161407-5712 status is  but expect it to be exited
	I0604 16:14:46.570921    7592 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:47.067168    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:48.123599    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:48.123599    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0564195s)
	I0604 16:14:48.123599    7592 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:48.127615    7592 oci.go:639] temporary error: container force-systemd-env-20220604161407-5712 status is  but expect it to be exited
	I0604 16:14:48.127615    7592 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:49.037305    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:50.091789    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:50.091789    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0544729s)
	I0604 16:14:50.091789    7592 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:50.091789    7592 oci.go:639] temporary error: container force-systemd-env-20220604161407-5712 status is  but expect it to be exited
	I0604 16:14:50.091789    7592 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:50.742246    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:51.825380    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:51.825380    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0831217s)
	I0604 16:14:51.825380    7592 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:51.825380    7592 oci.go:639] temporary error: container force-systemd-env-20220604161407-5712 status is  but expect it to be exited
	I0604 16:14:51.825380    7592 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:52.942913    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:54.001849    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:54.002104    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0589239s)
	I0604 16:14:54.002162    7592 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:54.002209    7592 oci.go:639] temporary error: container force-systemd-env-20220604161407-5712 status is  but expect it to be exited
	I0604 16:14:54.002250    7592 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:55.522439    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:14:56.588425    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:14:56.588458    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0651736s)
	I0604 16:14:56.588568    7592 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:56.588568    7592 oci.go:639] temporary error: container force-systemd-env-20220604161407-5712 status is  but expect it to be exited
	I0604 16:14:56.588606    7592 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:14:59.648162    7592 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}
	W0604 16:15:00.686885    7592 cli_runner.go:211] docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:15:00.687149    7592 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: (1.0387118s)
	I0604 16:15:00.687230    7592 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:00.687230    7592 oci.go:639] temporary error: container force-systemd-env-20220604161407-5712 status is  but expect it to be exited
	I0604 16:15:00.687230    7592 oci.go:88] couldn't shut down force-systemd-env-20220604161407-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	 
	I0604 16:15:00.695167    7592 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-20220604161407-5712
	I0604 16:15:01.758908    7592 cli_runner.go:217] Completed: docker rm -f -v force-systemd-env-20220604161407-5712: (1.0637293s)
	I0604 16:15:01.765826    7592 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220604161407-5712
	W0604 16:15:02.874415    7592 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:02.874444    7592 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220604161407-5712: (1.1083998s)
	I0604 16:15:02.883634    7592 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:15:03.977029    7592 cli_runner.go:211] docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:15:03.977029    7592 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0933831s)
	I0604 16:15:03.984255    7592 network_create.go:272] running [docker network inspect force-systemd-env-20220604161407-5712] to gather additional debugging logs...
	I0604 16:15:03.984255    7592 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220604161407-5712
	W0604 16:15:05.106175    7592 cli_runner.go:211] docker network inspect force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:05.106355    7592 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220604161407-5712: (1.1219076s)
	I0604 16:15:05.106411    7592 network_create.go:275] error running [docker network inspect force-systemd-env-20220604161407-5712]: docker network inspect force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220604161407-5712
	I0604 16:15:05.106454    7592 network_create.go:277] output of [docker network inspect force-systemd-env-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220604161407-5712
	
	** /stderr **
	W0604 16:15:05.107687    7592 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:15:05.107718    7592 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:15:06.117784    7592 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:15:06.124083    7592 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:15:06.124083    7592 start.go:165] libmachine.API.Create for "force-systemd-env-20220604161407-5712" (driver="docker")
	I0604 16:15:06.124083    7592 client.go:168] LocalClient.Create starting
	I0604 16:15:06.124976    7592 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:15:06.125266    7592 main.go:134] libmachine: Decoding PEM data...
	I0604 16:15:06.125385    7592 main.go:134] libmachine: Parsing certificate...
	I0604 16:15:06.125542    7592 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:15:06.125783    7592 main.go:134] libmachine: Decoding PEM data...
	I0604 16:15:06.125783    7592 main.go:134] libmachine: Parsing certificate...
	I0604 16:15:06.133541    7592 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:15:07.262326    7592 cli_runner.go:211] docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:15:07.262405    7592 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1284876s)
	I0604 16:15:07.269281    7592 network_create.go:272] running [docker network inspect force-systemd-env-20220604161407-5712] to gather additional debugging logs...
	I0604 16:15:07.269281    7592 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220604161407-5712
	W0604 16:15:08.393003    7592 cli_runner.go:211] docker network inspect force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:08.393197    7592 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220604161407-5712: (1.1235197s)
	I0604 16:15:08.393255    7592 network_create.go:275] error running [docker network inspect force-systemd-env-20220604161407-5712]: docker network inspect force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220604161407-5712
	I0604 16:15:08.393255    7592 network_create.go:277] output of [docker network inspect force-systemd-env-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220604161407-5712
	
	** /stderr **
	I0604 16:15:08.400310    7592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:15:09.558312    7592 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1575872s)
	I0604 16:15:09.577458    7592 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000fa02b8] amended:false}} dirty:map[] misses:0}
	I0604 16:15:09.578333    7592 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:15:09.596417    7592 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000fa02b8] amended:true}} dirty:map[192.168.49.0:0xc000fa02b8 192.168.58.0:0xc0006d0440] misses:0}
	I0604 16:15:09.596417    7592 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:15:09.596417    7592 network_create.go:115] attempt to create docker network force-systemd-env-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:15:09.604683    7592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712
	W0604 16:15:10.739350    7592 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:10.739409    7592 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: (1.1345251s)
	E0604 16:15:10.739409    7592 network_create.go:104] error while trying to create docker network force-systemd-env-20220604161407-5712 192.168.58.0/24: create docker network force-systemd-env-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dbc21085311b03003b3653edc3a0d4255838822978ec2581d5a1983528b4df99 (br-dbc21085311b): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:15:10.739409    7592 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dbc21085311b03003b3653edc3a0d4255838822978ec2581d5a1983528b4df99 (br-dbc21085311b): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dbc21085311b03003b3653edc3a0d4255838822978ec2581d5a1983528b4df99 (br-dbc21085311b): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:15:10.753781    7592 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:15:11.797195    7592 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0432428s)
	I0604 16:15:11.804900    7592 cli_runner.go:164] Run: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:15:12.857192    7592 cli_runner.go:211] docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:15:12.857192    7592 cli_runner.go:217] Completed: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0522498s)
	I0604 16:15:12.857192    7592 client.go:171] LocalClient.Create took 6.7330362s
	I0604 16:15:14.874382    7592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:15:14.881821    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:15.902475    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:15.902475    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0206429s)
	I0604 16:15:15.902475    7592 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:16.245108    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:17.299435    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:17.299435    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0543158s)
	W0604 16:15:17.299435    7592 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	
	W0604 16:15:17.299435    7592 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:17.310234    7592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:15:17.318050    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:18.437352    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:18.437352    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.1192101s)
	I0604 16:15:18.437731    7592 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:18.667376    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:19.794258    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:19.794377    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.1266794s)
	W0604 16:15:19.794548    7592 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	
	W0604 16:15:19.794601    7592 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:19.794601    7592 start.go:134] duration metric: createHost completed in 13.6765972s
	I0604 16:15:19.805229    7592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:15:19.812065    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:20.911012    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:20.911276    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0989353s)
	I0604 16:15:20.911496    7592 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:21.170226    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:22.256905    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:22.256905    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0866673s)
	W0604 16:15:22.256905    7592 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	
	W0604 16:15:22.256905    7592 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:22.266901    7592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:15:22.272913    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:23.314757    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:23.314757    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.0416746s)
	I0604 16:15:23.314757    7592 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:23.530518    7592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712
	W0604 16:15:24.645079    7592 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712 returned with exit code 1
	I0604 16:15:24.645079    7592 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: (1.1145492s)
	W0604 16:15:24.645079    7592 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	
	W0604 16:15:24.645079    7592 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	I0604 16:15:24.645079    7592 fix.go:57] fixHost completed within 46.4652011s
	I0604 16:15:24.645079    7592 start.go:81] releasing machines lock for "force-systemd-env-20220604161407-5712", held for 46.4652011s
	W0604 16:15:24.646020    7592 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20220604161407-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220604161407-5712 container: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220604161407-5712': mkdir /var/lib/docker/volumes/force-systemd-env-20220604161407-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20220604161407-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220604161407-5712 container: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220604161407-5712': mkdir /var/lib/docker/volumes/force-systemd-env-20220604161407-5712: read-only file system
	
	I0604 16:15:24.651120    7592 out.go:177] 
	W0604 16:15:24.653282    7592 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220604161407-5712 container: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220604161407-5712': mkdir /var/lib/docker/volumes/force-systemd-env-20220604161407-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220604161407-5712 container: docker volume create force-systemd-env-20220604161407-5712 --label name.minikube.sigs.k8s.io=force-systemd-env-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220604161407-5712': mkdir /var/lib/docker/volumes/force-systemd-env-20220604161407-5712: read-only file system
	
	W0604 16:15:24.653282    7592 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:15:24.653282    7592 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:15:24.659988    7592 out.go:177] 

** /stderr **
docker_test.go:152: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-20220604161407-5712 --memory=2048 --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220604161407-5712 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20220604161407-5712 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (3.3142906s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-env-20220604161407-5712 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:161: *** TestForceSystemdEnv FAILED at 2022-06-04 16:15:28.0966522 +0000 GMT m=+3333.289058001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20220604161407-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-env-20220604161407-5712: exit status 1 (1.0977645s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-env-20220604161407-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220604161407-5712 -n force-systemd-env-20220604161407-5712

=== CONT  TestForceSystemdEnv
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220604161407-5712 -n force-systemd-env-20220604161407-5712: exit status 7 (2.9588216s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:15:32.129590    3116 status.go:247] status error: host: state: unknown state "force-systemd-env-20220604161407-5712": docker container inspect force-systemd-env-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220604161407-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-20220604161407-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-20220604161407-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220604161407-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220604161407-5712: (8.653185s)
--- FAIL: TestForceSystemdEnv (92.97s)

TestErrorSpam/setup (74.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220604152324-5712 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 --driver=docker
error_spam_test.go:78: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p nospam-20220604152324-5712 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 --driver=docker: exit status 60 (1m14.1134949s)

-- stdout --
	* [nospam-20220604152324-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node nospam-20220604152324-5712 in cluster nospam-20220604152324-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	* docker "nospam-20220604152324-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:23:38.894097     748 network_create.go:104] error while trying to create docker network nospam-20220604152324-5712 192.168.49.0/24: create docker network nospam-20220604152324-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fc01243c0f01bc2afb17346d603fa7b39b9e72813c3bb926f913798769a422ba (br-fc01243c0f01): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220604152324-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fc01243c0f01bc2afb17346d603fa7b39b9e72813c3bb926f913798769a422ba (br-fc01243c0f01): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system
	
	E0604 15:24:25.593376     748 network_create.go:104] error while trying to create docker network nospam-20220604152324-5712 192.168.58.0/24: create docker network nospam-20220604152324-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9827c51b2485138e5a6c24808bb790a48b9600fcce8dbcddb14d05c4a9083ff9 (br-9827c51b2485): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220604152324-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9827c51b2485138e5a6c24808bb790a48b9600fcce8dbcddb14d05c4a9083ff9 (br-9827c51b2485): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p nospam-20220604152324-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
error_spam_test.go:80: "out/minikube-windows-amd64.exe start -p nospam-20220604152324-5712 -n=1 --memory=2250 --wait=false --log_dir=C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 --driver=docker" failed: exit status 60
error_spam_test.go:93: unexpected stderr: "E0604 15:23:38.894097     748 network_create.go:104] error while trying to create docker network nospam-20220604152324-5712 192.168.49.0/24: create docker network nospam-20220604152324-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network fc01243c0f01bc2afb17346d603fa7b39b9e72813c3bb926f913798769a422ba (br-fc01243c0f01): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220604152324-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network fc01243c0f01bc2afb17346d603fa7b39b9e72813c3bb926f913798769a422ba (br-fc01243c0f01): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system"
error_spam_test.go:93: unexpected stderr: "E0604 15:24:25.593376     748 network_create.go:104] error while trying to create docker network nospam-20220604152324-5712 192.168.58.0/24: create docker network nospam-20220604152324-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 9827c51b2485138e5a6c24808bb790a48b9600fcce8dbcddb14d05c4a9083ff9 (br-9827c51b2485): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220604152324-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 9827c51b2485138e5a6c24808bb790a48b9600fcce8dbcddb14d05c4a9083ff9 (br-9827c51b2485): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "* Failed to start docker container. Running \"minikube delete -p nospam-20220604152324-5712\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system"
error_spam_test.go:93: unexpected stderr: "X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system"
error_spam_test.go:93: unexpected stderr: "* Suggestion: Restart Docker"
error_spam_test.go:93: unexpected stderr: "* Related issue: https://github.com/kubernetes/minikube/issues/6825"
error_spam_test.go:107: minikube stdout:
* [nospam-20220604152324-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=14123
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with the root privilege
* Starting control plane node nospam-20220604152324-5712 in cluster nospam-20220604152324-5712
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20220604152324-5712" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...

error_spam_test.go:108: minikube stderr:
E0604 15:23:38.894097     748 network_create.go:104] error while trying to create docker network nospam-20220604152324-5712 192.168.49.0/24: create docker network nospam-20220604152324-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network fc01243c0f01bc2afb17346d603fa7b39b9e72813c3bb926f913798769a422ba (br-fc01243c0f01): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220604152324-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network fc01243c0f01bc2afb17346d603fa7b39b9e72813c3bb926f913798769a422ba (br-fc01243c0f01): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system

E0604 15:24:25.593376     748 network_create.go:104] error while trying to create docker network nospam-20220604152324-5712 192.168.58.0/24: create docker network nospam-20220604152324-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 9827c51b2485138e5a6c24808bb790a48b9600fcce8dbcddb14d05c4a9083ff9 (br-9827c51b2485): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220604152324-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220604152324-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 9827c51b2485138e5a6c24808bb790a48b9600fcce8dbcddb14d05c4a9083ff9 (br-9827c51b2485): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p nospam-20220604152324-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220604152324-5712 container: docker volume create nospam-20220604152324-5712 --label name.minikube.sigs.k8s.io=nospam-20220604152324-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220604152324-5712: error while creating volume root path '/var/lib/docker/volumes/nospam-20220604152324-5712': mkdir /var/lib/docker/volumes/nospam-20220604152324-5712: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
error_spam_test.go:118: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:118: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:118: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (74.12s)
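Not part of the harness output — the repeated `networks have overlapping IPv4` errors above come from the Docker daemon refusing to create a bridge network whose subnet overlaps one that already exists (`br-c61886399614`, `br-1140b1ac4d94`). The overlap check itself is simple CIDR arithmetic; a minimal sketch (not minikube's actual code) using Python's `ipaddress` module, with the subnets taken from the log:

```python
import ipaddress

# Subnet minikube requested in the run above, and a stale bridge network
# (e.g. br-c61886399614) assumed to already occupy the same range.
requested = ipaddress.ip_network("192.168.49.0/24")
existing = ipaddress.ip_network("192.168.49.0/24")

# The daemon rejects the create when any existing bridge overlaps the request.
print(requested.overlaps(existing))  # True -> "networks have overlapping IPv4"

# minikube's retry moved to the next block, 192.168.58.0/24. Distinct /24s
# do not overlap each other -- the retry failed only because a second stale
# network (br-1140b1ac4d94) already sat on that range too.
retry = ipaddress.ip_network("192.168.58.0/24")
print(retry.overlaps(requested))  # False
```

This is why `docker network prune` (or removing the stale `br-*` networks) typically clears this class of failure before the later `read-only file system` volume error takes over.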

TestFunctional/serial/StartWithProxy (78.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: exit status 60 (1m14.2028826s)

-- stdout --
	* [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220604152644-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
	E0604 15:26:58.836988    7224 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.49.0/24: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e1c39fd97a3324eff53caff814118af60389f07888d47d3a75c62608f116011 (br-6e1c39fd97a3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e1c39fd97a3324eff53caff814118af60389f07888d47d3a75c62608f116011 (br-6e1c39fd97a3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
	E0604 15:27:45.488971    7224 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.58.0/24: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7991f08a6b755d93d1fffb7c87c79614ac775a897d38af2b86de6fd0d3ef7ce5 (br-7991f08a6b75): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7991f08a6b755d93d1fffb7c87c79614ac775a897d38af2b86de6fd0d3ef7ce5 (br-7991f08a6b75): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
functional_test.go:2162: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker": exit status 60
functional_test.go:2167: start stdout=* [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=14123
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with the root privilege
* Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* docker "functional-20220604152644-5712" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...

, want: *Found network options:*
functional_test.go:2172: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
E0604 15:26:58.836988    7224 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.49.0/24: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 6e1c39fd97a3324eff53caff814118af60389f07888d47d3a75c62608f116011 (br-6e1c39fd97a3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 6e1c39fd97a3324eff53caff814118af60389f07888d47d3a75c62608f116011 (br-6e1c39fd97a3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system

! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54044 to docker env.
E0604 15:27:45.488971    7224 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.58.0/24: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 7991f08a6b755d93d1fffb7c87c79614ac775a897d38af2b86de6fd0d3ef7ce5 (br-7991f08a6b75): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 7991f08a6b755d93d1fffb7c87c79614ac775a897d38af2b86de6fd0d3ef7ce5 (br-7991f08a6b75): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.0661568s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.7669148s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:28:02.588304    1324 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/StartWithProxy (78.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
functional_test.go:630: audit.json does not contain the profile "functional-20220604152644-5712"
--- FAIL: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (113.5s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --alsologtostderr -v=8
functional_test.go:651: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --alsologtostderr -v=8: exit status 60 (1m49.3732793s)

-- stdout --
	* [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
	* Pulling base image ...
	* docker "functional-20220604152644-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220604152644-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 15:28:02.835480    7572 out.go:296] Setting OutFile to fd 784 ...
	I0604 15:28:02.891348    7572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:28:02.891348    7572 out.go:309] Setting ErrFile to fd 732...
	I0604 15:28:02.891348    7572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:28:02.905744    7572 out.go:303] Setting JSON to false
	I0604 15:28:02.908683    7572 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7554,"bootTime":1654348928,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:28:02.908683    7572 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:28:02.931549    7572 out.go:177] * [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:28:02.935413    7572 notify.go:193] Checking for updates...
	I0604 15:28:02.937662    7572 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:28:02.939944    7572 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:28:02.941835    7572 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 15:28:02.944086    7572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 15:28:02.946821    7572 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:28:02.948024    7572 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:28:05.561875    7572 docker.go:137] docker version: linux-20.10.16
	I0604 15:28:05.569254    7572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:28:07.538447    7572 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9689134s)
	I0604 15:28:07.539161    7572 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:28:06.570034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:28:07.550388    7572 out.go:177] * Using the docker driver based on existing profile
	I0604 15:28:07.553566    7572 start.go:284] selected driver: docker
	I0604 15:28:07.554296    7572 start.go:806] validating driver "docker" against &{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:28:07.554679    7572 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 15:28:07.575333    7572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:28:09.548691    7572 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9733392s)
	I0604 15:28:09.548691    7572 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:28:08.5754115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:28:09.594515    7572 cni.go:95] Creating CNI manager for ""
	I0604 15:28:09.594596    7572 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 15:28:09.594596    7572 start_flags.go:306] config:
	{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:28:09.613187    7572 out.go:177] * Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
	I0604 15:28:09.615189    7572 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 15:28:09.618930    7572 out.go:177] * Pulling base image ...
	I0604 15:28:09.622762    7572 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 15:28:09.622762    7572 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 15:28:09.622762    7572 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 15:28:09.622762    7572 cache.go:57] Caching tarball of preloaded images
	I0604 15:28:09.623380    7572 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 15:28:09.623683    7572 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 15:28:09.623889    7572 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220604152644-5712\config.json ...
	I0604 15:28:10.692124    7572 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 15:28:10.692124    7572 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:28:10.692124    7572 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:28:10.692124    7572 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 15:28:10.692654    7572 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 15:28:10.692654    7572 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 15:28:10.692896    7572 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 15:28:10.692896    7572 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 15:28:10.692985    7572 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:28:12.900079    7572 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-3860750831: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-3860750831: read-only file system"}
	I0604 15:28:12.900079    7572 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 15:28:12.900079    7572 cache.go:206] Successfully downloaded all kic artifacts
	I0604 15:28:12.900079    7572 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:28:12.900706    7572 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 537.6µs
	I0604 15:28:12.900985    7572 start.go:94] Skipping create...Using existing machine configuration
	I0604 15:28:12.901076    7572 fix.go:55] fixHost starting: 
	I0604 15:28:12.920381    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:13.936342    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:13.936342    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.015951s)
	I0604 15:28:13.936342    7572 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:13.936342    7572 fix.go:108] machineExists: false. err=machine does not exist
	I0604 15:28:13.941925    7572 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
	I0604 15:28:13.944801    7572 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
	I0604 15:28:13.959860    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:14.990197    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:14.990197    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0303267s)
	W0604 15:28:14.990197    7572 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:14.990197    7572 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:15.006343    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:16.004622    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:16.004622    7572 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:16.015089    7572 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:28:16.997151    7572 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:16.997292    7572 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
	I0604 15:28:17.004853    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:17.987628    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	W0604 15:28:17.987628    7572 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:17.996590    7572 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
	W0604 15:28:19.013057    7572 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 15:28:19.013057    7572 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": (1.0158744s)
	I0604 15:28:19.013057    7572 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:20.021583    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:21.032363    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:21.032363    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0107697s)
	I0604 15:28:21.032363    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:21.032363    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:28:21.032363    7572 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:21.608288    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:22.614753    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:22.614868    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0064549s)
	I0604 15:28:22.614906    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:22.615087    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:28:22.615193    7572 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:23.720010    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:24.749536    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:24.749571    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0293955s)
	I0604 15:28:24.749660    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:24.749820    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:28:24.749860    7572 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:26.084828    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:27.075511    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:27.075626    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:27.075626    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:28:27.075626    7572 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:28.675362    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:29.699360    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:29.699360    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0237201s)
	I0604 15:28:29.699360    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:29.699360    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:28:29.699360    7572 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:32.061922    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:33.086448    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:33.086448    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0245158s)
	I0604 15:28:33.086448    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:33.086448    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:28:33.086448    7572 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:37.616206    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:28:38.641560    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:28:38.641827    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0253443s)
	I0604 15:28:38.641827    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:38.641827    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:28:38.641827    7572 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	 
	I0604 15:28:38.641827    7572 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
	I0604 15:28:39.687301    7572 cli_runner.go:217] Completed: docker rm -f -v functional-20220604152644-5712: (1.0454635s)
	I0604 15:28:39.695708    7572 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:28:40.700614    7572 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:40.700686    7572 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0047598s)
	I0604 15:28:40.710049    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:28:41.717202    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:28:41.717337    7572 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0069852s)
	I0604 15:28:41.726720    7572 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:28:41.726720    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:28:42.717544    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:42.717544    7572 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:28:42.717544    7572 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	W0604 15:28:42.718448    7572 delete.go:139] delete failed (probably ok) <nil>
	I0604 15:28:42.718448    7572 fix.go:115] Sleeping 1 second for extra luck!
	I0604 15:28:43.729621    7572 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:28:43.733624    7572 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0604 15:28:43.733999    7572 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
	I0604 15:28:43.733999    7572 client.go:168] LocalClient.Create starting
	I0604 15:28:43.734643    7572 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:28:43.734901    7572 main.go:134] libmachine: Decoding PEM data...
	I0604 15:28:43.734901    7572 main.go:134] libmachine: Parsing certificate...
	I0604 15:28:43.734901    7572 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:28:43.734901    7572 main.go:134] libmachine: Decoding PEM data...
	I0604 15:28:43.734901    7572 main.go:134] libmachine: Parsing certificate...
	I0604 15:28:43.748727    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:28:44.789873    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:28:44.789873    7572 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.040802s)
	I0604 15:28:44.800546    7572 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:28:44.800579    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:28:45.814714    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:45.814714    7572 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0140981s)
	I0604 15:28:45.814714    7572 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:28:45.814812    7572 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	I0604 15:28:45.822405    7572 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:28:46.847753    7572 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0251704s)
	I0604 15:28:46.864098    7572 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000788208] misses:0}
	I0604 15:28:46.864265    7572 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:28:46.864383    7572 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 15:28:46.871070    7572 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
	W0604 15:28:47.876620    7572 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:47.876844    7572 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: (1.0044572s)
	E0604 15:28:47.876974    7572 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.49.0/24: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 93b3880681dae54eb7fffc2b86b5a523a281986cf8003de2c326fbacdc525a96 (br-93b3880681da): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 15:28:47.877422    7572 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 93b3880681dae54eb7fffc2b86b5a523a281986cf8003de2c326fbacdc525a96 (br-93b3880681da): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 93b3880681dae54eb7fffc2b86b5a523a281986cf8003de2c326fbacdc525a96 (br-93b3880681da): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 15:28:47.891728    7572 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:28:48.951020    7572 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0567133s)
	I0604 15:28:48.959328    7572 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:28:49.994395    7572 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:28:49.994693    7572 cli_runner.go:217] Completed: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0350559s)
	I0604 15:28:49.994901    7572 client.go:171] LocalClient.Create took 6.2607322s
	I0604 15:28:52.013392    7572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:28:52.020365    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:28:53.046868    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:53.046868    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0264358s)
	I0604 15:28:53.046868    7572 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:53.227650    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:28:54.253466    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:54.253466    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0256441s)
	W0604 15:28:54.253961    7572 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:28:54.253961    7572 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:54.266164    7572 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:28:54.273273    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:28:55.313287    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:55.313455    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0400034s)
	I0604 15:28:55.313575    7572 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:55.525218    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:28:56.553706    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:56.553706    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0281619s)
	W0604 15:28:56.553706    7572 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:28:56.553706    7572 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:56.553706    7572 start.go:134] duration metric: createHost completed in 12.8239565s
	I0604 15:28:56.565662    7572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:28:56.573729    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:28:57.568763    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:57.569182    7572 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:57.907391    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:28:58.940071    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:58.940180    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0324768s)
	W0604 15:28:58.940333    7572 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:28:58.940333    7572 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:28:58.951064    7572 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:28:58.957067    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:28:59.971510    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:28:59.971684    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0144336s)
	I0604 15:28:59.971953    7572 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:00.203665    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:01.216297    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:01.216297    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0126223s)
	W0604 15:29:01.216297    7572 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:29:01.216297    7572 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:01.216297    7572 fix.go:57] fixHost completed within 48.3148303s
	I0604 15:29:01.216297    7572 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 48.3150407s
	W0604 15:29:01.216297    7572 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	W0604 15:29:01.216297    7572 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	I0604 15:29:01.216297    7572 start.go:614] Will try again in 5 seconds ...
	I0604 15:29:06.226179    7572 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:29:06.226560    7572 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 212.9µs
	I0604 15:29:06.226560    7572 start.go:94] Skipping create...Using existing machine configuration
	I0604 15:29:06.226560    7572 fix.go:55] fixHost starting: 
	I0604 15:29:06.246192    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:07.287277    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:07.287277    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0407792s)
	I0604 15:29:07.287277    7572 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:07.287277    7572 fix.go:108] machineExists: false. err=machine does not exist
	I0604 15:29:07.290492    7572 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
	I0604 15:29:07.300586    7572 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
	I0604 15:29:07.315090    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:08.331000    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:08.331000    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0158993s)
	W0604 15:29:08.331000    7572 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:08.331000    7572 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:08.347782    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:09.366880    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:09.366880    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0190883s)
	I0604 15:29:09.366880    7572 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:09.378416    7572 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:29:10.415530    7572 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:10.415589    7572 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.037063s)
	I0604 15:29:10.415589    7572 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
	I0604 15:29:10.422878    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:11.453357    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:11.453357    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0303903s)
	W0604 15:29:11.453683    7572 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:11.464068    7572 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
	W0604 15:29:12.472251    7572 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 15:29:12.472459    7572 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": (1.0077142s)
	I0604 15:29:12.472459    7572 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:13.499226    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:14.533300    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:14.533499    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0340641s)
	I0604 15:29:14.533600    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:14.533600    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:29:14.533642    7572 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:15.041925    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:16.114729    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:16.114761    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0727391s)
	I0604 15:29:16.114968    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:16.114997    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:29:16.115069    7572 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:16.712018    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:17.736852    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:17.736852    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0248236s)
	I0604 15:29:17.736852    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:17.736852    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:29:17.736852    7572 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:18.649412    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:19.659841    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:19.659870    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0101356s)
	I0604 15:29:19.659981    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:19.660024    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:29:19.660050    7572 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:21.668055    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:22.679115    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:22.679115    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0110505s)
	I0604 15:29:22.679115    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:22.679115    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:29:22.679115    7572 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:24.511424    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:25.542878    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:25.542878    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0314442s)
	I0604 15:29:25.542878    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:25.542878    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:29:25.542878    7572 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:28.229315    7572 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:29:29.262957    7572 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:29:29.263204    7572 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0336319s)
	I0604 15:29:29.263319    7572 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:29.263358    7572 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:29:29.263427    7572 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	 
	I0604 15:29:29.270546    7572 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
	I0604 15:29:30.300584    7572 cli_runner.go:217] Completed: docker rm -f -v functional-20220604152644-5712: (1.0300277s)
	I0604 15:29:30.308315    7572 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:29:31.276473    7572 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:31.285510    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:29:32.301123    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:29:32.301123    7572 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0154361s)
	I0604 15:29:32.309073    7572 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:29:32.309073    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:29:33.298852    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:33.298852    7572 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:29:33.298852    7572 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	W0604 15:29:33.299802    7572 delete.go:139] delete failed (probably ok) <nil>
	I0604 15:29:33.299802    7572 fix.go:115] Sleeping 1 second for extra luck!
	I0604 15:29:34.302544    7572 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:29:34.305981    7572 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0604 15:29:34.306149    7572 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
	I0604 15:29:34.306371    7572 client.go:168] LocalClient.Create starting
	I0604 15:29:34.306809    7572 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:29:34.307087    7572 main.go:134] libmachine: Decoding PEM data...
	I0604 15:29:34.307177    7572 main.go:134] libmachine: Parsing certificate...
	I0604 15:29:34.307385    7572 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:29:34.307483    7572 main.go:134] libmachine: Decoding PEM data...
	I0604 15:29:34.307483    7572 main.go:134] libmachine: Parsing certificate...
	I0604 15:29:34.317036    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:29:35.336358    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:29:35.336358    7572 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0193117s)
	I0604 15:29:35.345220    7572 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:29:35.345220    7572 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:29:36.360702    7572 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:36.360702    7572 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0154723s)
	I0604 15:29:36.360702    7572 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:29:36.360702    7572 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	I0604 15:29:36.370501    7572 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:29:37.384957    7572 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000788208] amended:false}} dirty:map[] misses:0}
	I0604 15:29:37.384957    7572 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:29:37.399135    7572 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000788208] amended:true}} dirty:map[192.168.49.0:0xc000788208 192.168.58.0:0xc000110130] misses:0}
	I0604 15:29:37.399135    7572 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:29:37.399135    7572 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 15:29:37.409055    7572 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
	W0604 15:29:38.433108    7572 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:38.433108    7572 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: (1.0240427s)
	E0604 15:29:38.433108    7572 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.58.0/24: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network feab8d847522dfd5a5788c1fd86be841c17ac3df8151d0dedc000ebfcb4ebbae (br-feab8d847522): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 15:29:38.433108    7572 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network feab8d847522dfd5a5788c1fd86be841c17ac3df8151d0dedc000ebfcb4ebbae (br-feab8d847522): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network feab8d847522dfd5a5788c1fd86be841c17ac3df8151d0dedc000ebfcb4ebbae (br-feab8d847522): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 15:29:38.451058    7572 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:29:39.497532    7572 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0464641s)
	I0604 15:29:39.506012    7572 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:29:40.497198    7572 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:29:40.497198    7572 client.go:171] LocalClient.Create took 6.1907645s
	I0604 15:29:42.519527    7572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:29:42.527768    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:43.551922    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:43.552002    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0240174s)
	I0604 15:29:43.552151    7572 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:43.842961    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:44.865902    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:44.865902    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.02293s)
	W0604 15:29:44.865902    7572 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:29:44.865902    7572 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:44.877197    7572 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:29:44.883858    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:45.927673    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:45.927764    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0435957s)
	I0604 15:29:45.927791    7572 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:46.142926    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:47.167783    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:47.167783    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0248471s)
	W0604 15:29:47.167783    7572 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:29:47.167783    7572 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:47.167783    7572 start.go:134] duration metric: createHost completed in 12.86511s
	I0604 15:29:47.179181    7572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:29:47.185925    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:48.184708    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:48.184708    7572 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:48.510532    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:49.503900    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	W0604 15:29:49.503900    7572 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:29:49.503900    7572 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:49.517370    7572 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:29:49.523941    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:50.536703    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:50.536770    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0124389s)
	I0604 15:29:50.536908    7572 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:50.900976    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:29:51.948507    7572 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:29:51.948507    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0472984s)
	W0604 15:29:51.948507    7572 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:29:51.948507    7572 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:29:51.948507    7572 fix.go:57] fixHost completed within 45.7214897s
	I0604 15:29:51.948507    7572 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 45.7214897s
	W0604 15:29:51.949234    7572 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	I0604 15:29:51.953446    7572 out.go:177] 
	W0604 15:29:51.955858    7572 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	W0604 15:29:51.955858    7572 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 15:29:51.955858    7572 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 15:29:51.959555    7572 out.go:177] 

** /stderr **
functional_test.go:653: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --alsologtostderr -v=8": exit status 60
functional_test.go:655: soft start took 1m49.6018976s for "functional-20220604152644-5712" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.0928516s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.7990671s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:29:56.091389    4124 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/SoftStart (113.50s)

TestFunctional/serial/KubeContext (4.18s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
functional_test.go:673: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (312.8743ms)

** stderr ** 
	W0604 15:29:56.358415    1728 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: current-context is not set

** /stderr **
functional_test.go:675: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:679: expected current-context = "functional-20220604152644-5712", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.0621264s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.7932019s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:30:00.274814    3612 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubeContext (4.18s)

TestFunctional/serial/KubectlGetPods (4.34s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220604152644-5712 get po -A
functional_test.go:688: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 get po -A: exit status 1 (284.226ms)

** stderr ** 
	W0604 15:30:00.522674    8948 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:690: failed to get kubectl pods: args "kubectl --context functional-20220604152644-5712 get po -A" : exit status 1
functional_test.go:694: expected stderr to be empty but got *"W0604 15:30:00.522674    8948 loader.go:223] Config not found: C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig\nError in configuration: \n* context was not found for specified context: functional-20220604152644-5712\n* cluster has no server defined\n"*: args "kubectl --context functional-20220604152644-5712 get po -A"
functional_test.go:697: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-20220604152644-5712 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1192192s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.9176285s)

-- stdout --
	Nonexistent

                                                
** stderr ** 
	E0604 15:30:04.613952    7664 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubectlGetPods (4.34s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl images
functional_test.go:1116: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl images: exit status 80 (3.029918s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1118: failed to get images by "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl images" ssh exit status 80
functional_test.go:1122: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.03s)

TestFunctional/serial/CacheCmd/cache/cache_reload (12.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1139: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 80 (3.0510353s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_695159ccd5e0da3f5d811f2823eb9163b9dc65a6_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1142: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 80
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (3.0656706s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache reload: (2.950229s)
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (3.0232994s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1157: expected "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 80
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (12.09s)

TestFunctional/serial/MinikubeKubectlCmd (5.91s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 kubectl -- --context functional-20220604152644-5712 get pods
functional_test.go:708: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 kubectl -- --context functional-20220604152644-5712 get pods: exit status 1 (2.0021454s)

** stderr ** 
	W0604 15:30:38.728176    3548 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* no server found for cluster "functional-20220604152644-5712"

** /stderr **
functional_test.go:711: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 kubectl -- --context functional-20220604152644-5712 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.0827486s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.8109026s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:30:42.710803    6448 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (5.91s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (5.86s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220604152644-5712 get pods
functional_test.go:733: (dbg) Non-zero exit: out\kubectl.exe --context functional-20220604152644-5712 get pods: exit status 1 (1.9057183s)

** stderr ** 
	W0604 15:30:44.549468    7820 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* no server found for cluster "functional-20220604152644-5712"

** /stderr **
functional_test.go:736: failed to run kubectl directly. args "out\\kubectl.exe --context functional-20220604152644-5712 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.0808506s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.8552337s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:30:48.565710    4156 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.86s)

TestFunctional/serial/ExtraConfig (113.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 60 (1m49.1940386s)

-- stdout --
	* [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
	* Pulling base image ...
	* docker "functional-20220604152644-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220604152644-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:31:33.805980    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.49.0/24: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	E0604 15:32:24.218824    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.58.0/24: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
functional_test.go:751: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 60
functional_test.go:753: restart took 1m49.194933s for "functional-20220604152644-5712" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.0951726s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.7828525s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:32:41.651727    8388 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ExtraConfig (113.08s)
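[Editor's note] The "networks have overlapping IPv4" errors above mean minikube asked Docker for a bridge subnet (first 192.168.49.0/24, then 192.168.58.0/24) that collided with a bridge already present on the daemon. The collision check itself is plain CIDR overlap, which can be sketched with Python's standard `ipaddress` module; the `subnets_overlap` helper below is ours for illustration, not part of minikube or Docker:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# A /24 identical to (or containing part of) an existing bridge subnet
# collides, which is what the daemon rejects above.
print(subnets_overlap("192.168.49.0/24", "192.168.49.128/25"))  # → True
# Disjoint ranges would have been accepted.
print(subnets_overlap("192.168.49.0/24", "192.168.58.0/24"))    # → False
```

A daemon restart (the suggested fix) clears stale `br-*` bridges, which is why it usually resolves this class of failure.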

TestFunctional/serial/ComponentHealth (4.15s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220604152644-5712 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:802: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (282.5565ms)

** stderr ** 
	W0604 15:32:41.900039    8224 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220604152644-5712" does not exist

** /stderr **
functional_test.go:804: failed to get components. args "kubectl --context functional-20220604152644-5712 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.0727943s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.775219s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:32:45.796665    1144 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ComponentHealth (4.15s)

TestFunctional/serial/LogsCmd (3.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 logs
functional_test.go:1228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 logs: exit status 80 (3.1715523s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | --all                               | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
	| delete  | -p                                  | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
	|         | download-only-20220604151954-5712   |                                     |                   |                |                     |                     |
	| delete  | -p                                  | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
	|         | download-only-20220604151954-5712   |                                     |                   |                |                     |                     |
	| delete  | -p                                  | download-docker-20220604152059-5712 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:21 GMT | 04 Jun 22 15:21 GMT |
	|         | download-docker-20220604152059-5712 |                                     |                   |                |                     |                     |
	| delete  | -p                                  | binary-mirror-20220604152145-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:21 GMT | 04 Jun 22 15:22 GMT |
	|         | binary-mirror-20220604152145-5712   |                                     |                   |                |                     |                     |
	| delete  | -p addons-20220604152202-5712       | addons-20220604152202-5712          | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:23 GMT | 04 Jun 22 15:23 GMT |
	| delete  | -p nospam-20220604152324-5712       | nospam-20220604152324-5712          | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:26 GMT | 04 Jun 22 15:26 GMT |
	| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
	| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
	| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache add                           |                                     |                   |                |                     |                     |
	|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache reload                        |                                     |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/04 15:30:48
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 15:30:48.822616    5556 out.go:296] Setting OutFile to fd 636 ...
	I0604 15:30:48.877033    5556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:30:48.877033    5556 out.go:309] Setting ErrFile to fd 972...
	I0604 15:30:48.877089    5556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:30:48.888979    5556 out.go:303] Setting JSON to false
	I0604 15:30:48.891643    5556 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7720,"bootTime":1654348928,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:30:48.891643    5556 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:30:48.895748    5556 out.go:177] * [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:30:48.900035    5556 notify.go:193] Checking for updates...
	I0604 15:30:48.902110    5556 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:30:48.904922    5556 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:30:48.907495    5556 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 15:30:48.909980    5556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 15:30:48.913591    5556 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:30:48.913838    5556 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:30:51.415997    5556 docker.go:137] docker version: linux-20.10.16
	I0604 15:30:51.424911    5556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:30:53.352092    5556 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9271617s)
	I0604 15:30:53.352092    5556 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-04 15:30:52.3902581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:30:53.360094    5556 out.go:177] * Using the docker driver based on existing profile
	I0604 15:30:53.360094    5556 start.go:284] selected driver: docker
	I0604 15:30:53.360094    5556 start.go:806] validating driver "docker" against &{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:30:53.360094    5556 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 15:30:53.382917    5556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:30:55.338951    5556 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9558143s)
	I0604 15:30:55.339196    5556 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-04 15:30:54.3823936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:30:55.399237    5556 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 15:30:55.399237    5556 cni.go:95] Creating CNI manager for ""
	I0604 15:30:55.399237    5556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 15:30:55.399237    5556 start_flags.go:306] config:
	{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:30:55.404611    5556 out.go:177] * Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
	I0604 15:30:55.406541    5556 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 15:30:55.408569    5556 out.go:177] * Pulling base image ...
	I0604 15:30:55.412266    5556 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 15:30:55.412266    5556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 15:30:55.412266    5556 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 15:30:55.412266    5556 cache.go:57] Caching tarball of preloaded images
	I0604 15:30:55.412266    5556 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 15:30:55.412266    5556 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 15:30:55.413196    5556 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220604152644-5712\config.json ...
	I0604 15:30:56.421671    5556 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 15:30:56.422168    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:30:56.422275    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:30:56.422275    5556 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 15:30:56.422275    5556 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 15:30:56.422275    5556 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 15:30:56.422892    5556 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 15:30:56.422892    5556 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 15:30:56.422892    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:30:58.630301    5556 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 15:30:58.630301    5556 cache.go:206] Successfully downloaded all kic artifacts
	I0604 15:30:58.630301    5556 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:30:58.630866    5556 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 565µs
	I0604 15:30:58.631246    5556 start.go:94] Skipping create...Using existing machine configuration
	I0604 15:30:58.631369    5556 fix.go:55] fixHost starting: 
	I0604 15:30:58.646862    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:30:59.668919    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:30:59.668919    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0220461s)
	I0604 15:30:59.668919    5556 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:30:59.668919    5556 fix.go:108] machineExists: false. err=machine does not exist
	I0604 15:30:59.674682    5556 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
	I0604 15:30:59.677653    5556 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
	I0604 15:30:59.690645    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:00.716522    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:00.716522    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0258669s)
	W0604 15:31:00.716522    5556 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:00.716522    5556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:00.733051    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:01.746737    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:01.746737    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0136755s)
	I0604 15:31:01.746737    5556 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:01.753490    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:31:02.781964    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:02.781964    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0284636s)
	I0604 15:31:02.781964    5556 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
	I0604 15:31:02.790941    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:03.829249    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:03.829279    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0380485s)
	W0604 15:31:03.829391    5556 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:03.839208    5556 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
	W0604 15:31:04.899079    5556 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 15:31:04.899079    5556 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": (1.0598033s)
	I0604 15:31:04.899079    5556 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:05.922276    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:06.964265    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:06.964265    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.041979s)
	I0604 15:31:06.964265    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:06.964265    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:31:06.964265    5556 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:07.535971    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:08.557603    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:08.557815    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0216217s)
	I0604 15:31:08.557815    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:08.557815    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:31:08.557910    5556 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:09.650639    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:10.678902    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:10.678976    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.028253s)
	I0604 15:31:10.679046    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:10.679046    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:31:10.679119    5556 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:12.009129    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:12.996529    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:12.996529    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:12.996529    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:31:12.996529    5556 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:14.590588    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:15.627998    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:15.627998    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0373994s)
	I0604 15:31:15.627998    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:15.627998    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:31:15.627998    5556 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:17.980904    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:19.004569    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:19.004569    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0236546s)
	I0604 15:31:19.004569    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:19.004569    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:31:19.004569    5556 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:23.535981    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:24.527488    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:24.527488    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:24.527488    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:31:24.527488    5556 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	 
	I0604 15:31:24.536258    5556 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
	I0604 15:31:25.541131    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:31:26.587804    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:26.587904    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0464683s)
	I0604 15:31:26.595750    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:31:27.621336    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:31:27.621336    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0255759s)
	I0604 15:31:27.628127    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:31:27.628127    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:31:28.662956    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:28.662956    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0348189s)
	I0604 15:31:28.662956    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:31:28.662956    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	W0604 15:31:28.664305    5556 delete.go:139] delete failed (probably ok) <nil>
	I0604 15:31:28.664305    5556 fix.go:115] Sleeping 1 second for extra luck!
	I0604 15:31:29.669267    5556 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:31:29.673660    5556 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0604 15:31:29.673920    5556 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
	I0604 15:31:29.673994    5556 client.go:168] LocalClient.Create starting
	I0604 15:31:29.674662    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:31:29.674850    5556 main.go:134] libmachine: Decoding PEM data...
	I0604 15:31:29.674907    5556 main.go:134] libmachine: Parsing certificate...
	I0604 15:31:29.675174    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:31:29.675368    5556 main.go:134] libmachine: Decoding PEM data...
	I0604 15:31:29.675368    5556 main.go:134] libmachine: Parsing certificate...
	I0604 15:31:29.684088    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:31:30.703966    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:31:30.703966    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.01967s)
	I0604 15:31:30.711759    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:31:30.711759    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:31:31.740076    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:31.740076    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0283068s)
	I0604 15:31:31.740076    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:31:31.740076    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	I0604 15:31:31.747903    5556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:31:32.756934    5556 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0088405s)
	I0604 15:31:32.785386    5556 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00078c940] misses:0}
	I0604 15:31:32.785837    5556 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:31:32.785837    5556 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 15:31:32.793066    5556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
	W0604 15:31:33.805980    5556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:33.805980    5556 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: (1.0129035s)
	E0604 15:31:33.805980    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.49.0/24: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 15:31:33.805980    5556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 15:31:33.820421    5556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:31:34.837392    5556 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0169611s)
	I0604 15:31:34.847196    5556 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:31:35.869493    5556 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:31:35.869493    5556 cli_runner.go:217] Completed: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0222861s)
	I0604 15:31:35.869493    5556 client.go:171] LocalClient.Create took 6.1954357s
	I0604 15:31:37.890920    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:31:37.896927    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:38.908668    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:38.908668    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0117305s)
	I0604 15:31:38.908668    5556 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:39.094086    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:40.103428    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:40.103428    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0093312s)
	W0604 15:31:40.103428    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:31:40.103428    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:40.113492    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:31:40.119819    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:41.149827    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:41.149827    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0298269s)
	I0604 15:31:41.149966    5556 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:41.363970    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:42.372315    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:42.372315    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0081432s)
	W0604 15:31:42.372613    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:31:42.372613    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:42.372613    5556 start.go:134] duration metric: createHost completed in 12.7032179s
	I0604 15:31:42.381937    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:31:42.387937    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:43.428496    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:43.428496    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0404418s)
	I0604 15:31:43.428496    5556 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:43.777406    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:44.805121    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:44.805173    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0273656s)
	W0604 15:31:44.805173    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:31:44.805173    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:44.815106    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:31:44.821027    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:45.851220    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:45.851253    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0300291s)
	I0604 15:31:45.851253    5556 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:46.082396    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:31:47.081532    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	W0604 15:31:47.081532    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:31:47.081532    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:47.081532    5556 fix.go:57] fixHost completed within 48.449699s
	I0604 15:31:47.081532    5556 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 48.4501768s
	W0604 15:31:47.081532    5556 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	W0604 15:31:47.082253    5556 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	I0604 15:31:47.082297    5556 start.go:614] Will try again in 5 seconds ...
	I0604 15:31:52.094081    5556 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:31:52.094570    5556 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 0s
	I0604 15:31:52.094570    5556 start.go:94] Skipping create...Using existing machine configuration
	I0604 15:31:52.094570    5556 fix.go:55] fixHost starting: 
	I0604 15:31:52.108126    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:53.112277    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:53.112328    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0038835s)
	I0604 15:31:53.112328    5556 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:53.112393    5556 fix.go:108] machineExists: false. err=machine does not exist
	I0604 15:31:53.115717    5556 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
	I0604 15:31:53.118909    5556 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
	I0604 15:31:53.131759    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:54.161977    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:54.162157    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0302074s)
	W0604 15:31:54.162217    5556 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:54.162217    5556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:54.177001    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:55.188835    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:31:55.188884    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0117238s)
	I0604 15:31:55.188949    5556 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:55.196561    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:31:56.230116    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:31:56.230116    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0333506s)
	I0604 15:31:56.230350    5556 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
	I0604 15:31:56.237898    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:31:57.232301    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	W0604 15:31:57.232301    5556 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:57.239785    5556 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
	W0604 15:31:58.237484    5556 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 15:31:58.237484    5556 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:31:59.257274    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:32:00.280199    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:32:00.280199    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0229149s)
	I0604 15:32:00.280199    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:00.280199    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:32:00.280199    5556 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:00.782772    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:32:01.790891    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:32:01.790891    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0081089s)
	I0604 15:32:01.790891    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:01.790891    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:32:01.790891    5556 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:02.389363    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:32:03.428979    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:32:03.428979    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0396055s)
	I0604 15:32:03.428979    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:03.428979    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:32:03.428979    5556 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:04.337425    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:32:05.378305    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:32:05.378305    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.04063s)
	I0604 15:32:05.378305    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:05.378305    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:32:05.378305    5556 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:07.386110    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:32:08.394144    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:32:08.394144    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0078658s)
	I0604 15:32:08.394221    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:08.394221    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:32:08.394221    5556 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:10.232873    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:32:11.262583    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:32:11.262583    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0295905s)
	I0604 15:32:11.262798    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:11.262798    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:32:11.262856    5556 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:13.947004    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:32:14.975561    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:32:14.975583    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0283586s)
	I0604 15:32:14.975737    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:14.975737    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
	I0604 15:32:14.975737    5556 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	 
	I0604 15:32:14.982820    5556 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
	I0604 15:32:16.009593    5556 cli_runner.go:217] Completed: docker rm -f -v functional-20220604152644-5712: (1.0267621s)
	I0604 15:32:16.015579    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
	W0604 15:32:17.055206    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:17.055206    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0396165s)
	I0604 15:32:17.062643    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:32:18.074003    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:32:18.074003    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.01135s)
	I0604 15:32:18.081042    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:32:18.081042    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:32:19.101825    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:19.101825    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.020662s)
	I0604 15:32:19.101906    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:32:19.101906    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	W0604 15:32:19.102807    5556 delete.go:139] delete failed (probably ok) <nil>
	I0604 15:32:19.102807    5556 fix.go:115] Sleeping 1 second for extra luck!
	I0604 15:32:20.117316    5556 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:32:20.122026    5556 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0604 15:32:20.122379    5556 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
	I0604 15:32:20.122379    5556 client.go:168] LocalClient.Create starting
	I0604 15:32:20.122379    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:32:20.122952    5556 main.go:134] libmachine: Decoding PEM data...
	I0604 15:32:20.123096    5556 main.go:134] libmachine: Parsing certificate...
	I0604 15:32:20.123122    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:32:20.123122    5556 main.go:134] libmachine: Decoding PEM data...
	I0604 15:32:20.123122    5556 main.go:134] libmachine: Parsing certificate...
	I0604 15:32:20.132723    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:32:21.133703    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:32:21.133751    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0007917s)
	I0604 15:32:21.141371    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
	I0604 15:32:21.141371    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
	W0604 15:32:22.181926    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:22.181926    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0404367s)
	I0604 15:32:22.182086    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220604152644-5712
	I0604 15:32:22.182138    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220604152644-5712
	
	** /stderr **
	I0604 15:32:22.189632    5556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:32:23.201455    5556 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c940] amended:false}} dirty:map[] misses:0}
	I0604 15:32:23.201455    5556 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:32:23.222020    5556 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c940] amended:true}} dirty:map[192.168.49.0:0xc00078c940 192.168.58.0:0xc00078cb20] misses:0}
	I0604 15:32:23.222020    5556 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:32:23.222020    5556 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 15:32:23.228673    5556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
	W0604 15:32:24.218824    5556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
	E0604 15:32:24.218824    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.58.0/24: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 15:32:24.218824    5556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
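The daemon error above rejects the new 192.168.58.0/24 bridge network because its IPv4 range collides with an existing bridge network. A minimal Python sketch of that overlap comparison (illustrative only, not Docker's actual implementation), using the subnets that appear in this log:

```python
# Illustrative sketch only -- not Docker's actual check. The daemon refuses a
# new bridge network whose IPv4 range overlaps an existing one; the stdlib
# ipaddress module can reproduce the comparison for the subnets in this log.
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True when two IPv4 CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# 192.168.49.0/24 was already reserved; 192.168.58.0/24 was the new attempt.
print(subnets_overlap("192.168.49.0/24", "192.168.58.0/24"))   # False
# Any existing network occupying part of 192.168.58.0/24 would conflict:
print(subnets_overlap("192.168.58.0/24", "192.168.58.128/25"))  # True
```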
	I0604 15:32:24.232969    5556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:32:25.255233    5556 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0222545s)
	I0604 15:32:25.263419    5556 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:32:26.270514    5556 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:32:26.270514    5556 cli_runner.go:217] Completed: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0069813s)
	I0604 15:32:26.270514    5556 client.go:171] LocalClient.Create took 6.1480724s
	I0604 15:32:28.287075    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:32:28.294316    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:29.314489    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:29.314489    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0198984s)
	I0604 15:32:29.314814    5556 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:29.606965    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:30.637327    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:30.637491    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0303509s)
	W0604 15:32:30.637753    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:32:30.637753    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:30.647917    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:32:30.653574    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:31.669080    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:31.669080    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0154957s)
	I0604 15:32:31.669080    5556 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:31.887985    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:32.870208    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	W0604 15:32:32.870550    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:32:32.870589    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:32.870614    5556 start.go:134] duration metric: createHost completed in 12.7531682s
	I0604 15:32:32.880218    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:32:32.886181    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:33.945699    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:33.945699    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0595077s)
	I0604 15:32:33.945699    5556 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:34.269900    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:35.323728    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:35.323728    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0535464s)
	W0604 15:32:35.323870    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:32:35.323870    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:35.333519    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:32:35.341484    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:36.364546    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:36.364546    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0228472s)
	I0604 15:32:36.364779    5556 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:36.719003    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
	W0604 15:32:37.746424    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
	I0604 15:32:37.746424    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0272252s)
	W0604 15:32:37.746623    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:32:37.746623    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	I0604 15:32:37.746623    5556 fix.go:57] fixHost completed within 45.6515903s
	I0604 15:32:37.746623    5556 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 45.6515903s
	W0604 15:32:37.747310    5556 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	I0604 15:32:37.752025    5556 out.go:177] 
	W0604 15:32:37.754122    5556 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
	
	W0604 15:32:37.754122    5556 out.go:239] * Suggestion: Restart Docker
	W0604 15:32:37.754122    5556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 15:32:37.756791    5556 out.go:177] 
	

	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_754.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
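The repeated `retry.go:31] will retry after ...` entries above come from minikube's retry helper re-running `docker container inspect` with short jittered delays. A minimal Python sketch of that retry-until-deadline pattern (illustrative only, not minikube's actual implementation):

```python
# Illustrative sketch only -- not minikube's actual retry.go implementation.
# Re-run a flaky operation with short jittered delays until it succeeds or a
# deadline passes, mirroring the sub-second "will retry after ..." waits above.
import random
import time

def retry(fn, max_time=2.0):
    """Call fn() until it returns without raising, or max_time elapses."""
    deadline = time.monotonic() + max_time
    attempt = 0
    while True:
        try:
            return fn()
        except Exception:
            attempt += 1
            if time.monotonic() >= deadline:
                raise
            # Jittered delay that grows with the attempt count.
            delay = min(random.uniform(0.1, 0.4) * attempt,
                        deadline - time.monotonic())
            time.sleep(max(delay, 0))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("No such container")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt, prints "ok"
```

Note that in this run the retries could never succeed: the container was never created, so every `docker container inspect` attempt failed until the deadline was reached.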
functional_test.go:1230: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 logs failed: exit status 80
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| delete  | --all                               | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
| delete  | -p                                  | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
|         | download-only-20220604151954-5712   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
|         | download-only-20220604151954-5712   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-docker-20220604152059-5712 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:21 GMT | 04 Jun 22 15:21 GMT |
|         | download-docker-20220604152059-5712 |                                     |                   |                |                     |                     |
| delete  | -p                                  | binary-mirror-20220604152145-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:21 GMT | 04 Jun 22 15:22 GMT |
|         | binary-mirror-20220604152145-5712   |                                     |                   |                |                     |                     |
| delete  | -p addons-20220604152202-5712       | addons-20220604152202-5712          | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:23 GMT | 04 Jun 22 15:23 GMT |
| delete  | -p nospam-20220604152324-5712       | nospam-20220604152324-5712          | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:26 GMT | 04 Jun 22 15:26 GMT |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache add                           |                                     |                   |                |                     |                     |
|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache reload                        |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/06/04 15:30:48
Running on machine: minikube2
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0604 15:30:48.822616    5556 out.go:296] Setting OutFile to fd 636 ...
I0604 15:30:48.877033    5556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0604 15:30:48.877033    5556 out.go:309] Setting ErrFile to fd 972...
I0604 15:30:48.877089    5556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0604 15:30:48.888979    5556 out.go:303] Setting JSON to false
I0604 15:30:48.891643    5556 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7720,"bootTime":1654348928,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0604 15:30:48.891643    5556 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0604 15:30:48.895748    5556 out.go:177] * [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0604 15:30:48.900035    5556 notify.go:193] Checking for updates...
I0604 15:30:48.902110    5556 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0604 15:30:48.904922    5556 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0604 15:30:48.907495    5556 out.go:177]   - MINIKUBE_LOCATION=14123
I0604 15:30:48.909980    5556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0604 15:30:48.913591    5556 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0604 15:30:48.913838    5556 driver.go:358] Setting default libvirt URI to qemu:///system
I0604 15:30:51.415997    5556 docker.go:137] docker version: linux-20.10.16
I0604 15:30:51.424911    5556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0604 15:30:53.352092    5556 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9271617s)
I0604 15:30:53.352092    5556 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-04 15:30:52.3902581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0604 15:30:53.360094    5556 out.go:177] * Using the docker driver based on existing profile
I0604 15:30:53.360094    5556 start.go:284] selected driver: docker
I0604 15:30:53.360094    5556 start.go:806] validating driver "docker" against &{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0604 15:30:53.360094    5556 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0604 15:30:53.382917    5556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0604 15:30:55.338951    5556 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9558143s)
I0604 15:30:55.339196    5556 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-04 15:30:54.3823936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0604 15:30:55.399237    5556 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0604 15:30:55.399237    5556 cni.go:95] Creating CNI manager for ""
I0604 15:30:55.399237    5556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0604 15:30:55.399237    5556 start_flags.go:306] config:
{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0604 15:30:55.404611    5556 out.go:177] * Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
I0604 15:30:55.406541    5556 cache.go:120] Beginning downloading kic base image for docker with docker
I0604 15:30:55.408569    5556 out.go:177] * Pulling base image ...
I0604 15:30:55.412266    5556 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0604 15:30:55.412266    5556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
I0604 15:30:55.412266    5556 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
I0604 15:30:55.412266    5556 cache.go:57] Caching tarball of preloaded images
I0604 15:30:55.412266    5556 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0604 15:30:55.412266    5556 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
I0604 15:30:55.413196    5556 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220604152644-5712\config.json ...
I0604 15:30:56.421671    5556 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
I0604 15:30:56.422168    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
I0604 15:30:56.422275    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
I0604 15:30:56.422275    5556 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
I0604 15:30:56.422275    5556 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
I0604 15:30:56.422275    5556 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
I0604 15:30:56.422892    5556 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
I0604 15:30:56.422892    5556 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
I0604 15:30:56.422892    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
I0604 15:30:58.630301    5556 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
I0604 15:30:58.630301    5556 cache.go:206] Successfully downloaded all kic artifacts
I0604 15:30:58.630301    5556 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0604 15:30:58.630866    5556 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 565µs
I0604 15:30:58.631246    5556 start.go:94] Skipping create...Using existing machine configuration
I0604 15:30:58.631369    5556 fix.go:55] fixHost starting: 
I0604 15:30:58.646862    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:30:59.668919    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:30:59.668919    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0220461s)
I0604 15:30:59.668919    5556 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:30:59.668919    5556 fix.go:108] machineExists: false. err=machine does not exist
I0604 15:30:59.674682    5556 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
I0604 15:30:59.677653    5556 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
I0604 15:30:59.690645    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:00.716522    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:00.716522    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0258669s)
W0604 15:31:00.716522    5556 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:00.716522    5556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:00.733051    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:01.746737    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:01.746737    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0136755s)
I0604 15:31:01.746737    5556 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:01.753490    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:31:02.781964    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:31:02.781964    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0284636s)
I0604 15:31:02.781964    5556 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
I0604 15:31:02.790941    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:03.829249    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:03.829279    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0380485s)
W0604 15:31:03.829391    5556 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:03.839208    5556 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
W0604 15:31:04.899079    5556 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
I0604 15:31:04.899079    5556 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": (1.0598033s)
I0604 15:31:04.899079    5556 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:05.922276    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:06.964265    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:06.964265    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.041979s)
I0604 15:31:06.964265    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:06.964265    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:06.964265    5556 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:07.535971    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:08.557603    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:08.557815    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0216217s)
I0604 15:31:08.557815    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:08.557815    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:08.557910    5556 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:09.650639    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:10.678902    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:10.678976    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.028253s)
I0604 15:31:10.679046    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:10.679046    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:10.679119    5556 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:12.009129    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:12.996529    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:12.996529    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:12.996529    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:12.996529    5556 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:14.590588    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:15.627998    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:15.627998    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0373994s)
I0604 15:31:15.627998    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:15.627998    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:15.627998    5556 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:17.980904    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:19.004569    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:19.004569    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0236546s)
I0604 15:31:19.004569    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:19.004569    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:19.004569    5556 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:23.535981    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:24.527488    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:24.527488    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:24.527488    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:24.527488    5556 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

I0604 15:31:24.536258    5556 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
I0604 15:31:25.541131    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:31:26.587804    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:31:26.587904    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0464683s)
I0604 15:31:26.595750    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:31:27.621336    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:31:27.621336    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0255759s)
I0604 15:31:27.628127    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:31:27.628127    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:31:28.662956    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:31:28.662956    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0348189s)
I0604 15:31:28.662956    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:31:28.662956    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
W0604 15:31:28.664305    5556 delete.go:139] delete failed (probably ok) <nil>
I0604 15:31:28.664305    5556 fix.go:115] Sleeping 1 second for extra luck!
I0604 15:31:29.669267    5556 start.go:131] createHost starting for "" (driver="docker")
I0604 15:31:29.673660    5556 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0604 15:31:29.673920    5556 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
I0604 15:31:29.673994    5556 client.go:168] LocalClient.Create starting
I0604 15:31:29.674662    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0604 15:31:29.674850    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:31:29.674907    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:31:29.675174    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0604 15:31:29.675368    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:31:29.675368    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:31:29.684088    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:31:30.703966    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:31:30.703966    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.01967s)
I0604 15:31:30.711759    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:31:30.711759    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:31:31.740076    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:31:31.740076    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0283068s)
I0604 15:31:31.740076    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:31:31.740076    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
I0604 15:31:31.747903    5556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0604 15:31:32.756934    5556 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0088405s)
I0604 15:31:32.785386    5556 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00078c940] misses:0}
I0604 15:31:32.785837    5556 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0604 15:31:32.785837    5556 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0604 15:31:32.793066    5556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
W0604 15:31:33.805980    5556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
I0604 15:31:33.805980    5556 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: (1.0129035s)
E0604 15:31:33.805980    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.49.0/24: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
W0604 15:31:33.805980    5556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4

I0604 15:31:33.820421    5556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0604 15:31:34.837392    5556 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0169611s)
I0604 15:31:34.847196    5556 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
W0604 15:31:35.869493    5556 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0604 15:31:35.869493    5556 cli_runner.go:217] Completed: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0222861s)
I0604 15:31:35.869493    5556 client.go:171] LocalClient.Create took 6.1954357s
I0604 15:31:37.890920    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:31:37.896927    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:38.908668    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:38.908668    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0117305s)
I0604 15:31:38.908668    5556 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:39.094086    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:40.103428    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:40.103428    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0093312s)
W0604 15:31:40.103428    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:40.103428    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:40.113492    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:31:40.119819    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:41.149827    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:41.149827    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0298269s)
I0604 15:31:41.149966    5556 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:41.363970    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:42.372315    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:42.372315    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0081432s)
W0604 15:31:42.372613    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:42.372613    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:42.372613    5556 start.go:134] duration metric: createHost completed in 12.7032179s
I0604 15:31:42.381937    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:31:42.387937    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:43.428496    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:43.428496    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0404418s)
I0604 15:31:43.428496    5556 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:43.777406    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:44.805121    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:44.805173    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0273656s)
W0604 15:31:44.805173    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:44.805173    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:44.815106    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:31:44.821027    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:45.851220    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:45.851253    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0300291s)
I0604 15:31:45.851253    5556 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:46.082396    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:47.081532    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
W0604 15:31:47.081532    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:47.081532    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:47.081532    5556 fix.go:57] fixHost completed within 48.449699s
I0604 15:31:47.081532    5556 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 48.4501768s
W0604 15:31:47.081532    5556 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
W0604 15:31:47.082253    5556 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system

I0604 15:31:47.082297    5556 start.go:614] Will try again in 5 seconds ...
I0604 15:31:52.094081    5556 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0604 15:31:52.094570    5556 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 0s
I0604 15:31:52.094570    5556 start.go:94] Skipping create...Using existing machine configuration
I0604 15:31:52.094570    5556 fix.go:55] fixHost starting: 
I0604 15:31:52.108126    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:53.112277    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:53.112328    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0038835s)
I0604 15:31:53.112328    5556 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:53.112393    5556 fix.go:108] machineExists: false. err=machine does not exist
I0604 15:31:53.115717    5556 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
I0604 15:31:53.118909    5556 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
I0604 15:31:53.131759    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:54.161977    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:54.162157    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0302074s)
W0604 15:31:54.162217    5556 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:54.162217    5556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:54.177001    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:55.188835    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:55.188884    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0117238s)
I0604 15:31:55.188949    5556 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:55.196561    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:31:56.230116    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:31:56.230116    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0333506s)
I0604 15:31:56.230350    5556 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
I0604 15:31:56.237898    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:57.232301    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
W0604 15:31:57.232301    5556 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:57.239785    5556 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
W0604 15:31:58.237484    5556 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
I0604 15:31:58.237484    5556 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:59.257274    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:00.280199    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:00.280199    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0229149s)
I0604 15:32:00.280199    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:00.280199    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:00.280199    5556 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:00.782772    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:01.790891    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:01.790891    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0081089s)
I0604 15:32:01.790891    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:01.790891    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:01.790891    5556 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:02.389363    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:03.428979    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:03.428979    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0396055s)
I0604 15:32:03.428979    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:03.428979    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:03.428979    5556 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:04.337425    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:05.378305    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:05.378305    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.04063s)
I0604 15:32:05.378305    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:05.378305    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:05.378305    5556 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:07.386110    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:08.394144    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:08.394144    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0078658s)
I0604 15:32:08.394221    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:08.394221    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:08.394221    5556 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:10.232873    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:11.262583    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:11.262583    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0295905s)
I0604 15:32:11.262798    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:11.262798    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:11.262856    5556 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:13.947004    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:14.975561    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:14.975583    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0283586s)
I0604 15:32:14.975737    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:14.975737    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:14.975737    5556 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:14.982820    5556 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
I0604 15:32:16.009593    5556 cli_runner.go:217] Completed: docker rm -f -v functional-20220604152644-5712: (1.0267621s)
I0604 15:32:16.015579    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:32:17.055206    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:32:17.055206    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0396165s)
I0604 15:32:17.062643    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:32:18.074003    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:32:18.074003    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.01135s)
I0604 15:32:18.081042    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:32:18.081042    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:32:19.101825    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:32:19.101825    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.020662s)
I0604 15:32:19.101906    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:32:19.101906    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
W0604 15:32:19.102807    5556 delete.go:139] delete failed (probably ok) <nil>
I0604 15:32:19.102807    5556 fix.go:115] Sleeping 1 second for extra luck!
I0604 15:32:20.117316    5556 start.go:131] createHost starting for "" (driver="docker")
I0604 15:32:20.122026    5556 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0604 15:32:20.122379    5556 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
I0604 15:32:20.122379    5556 client.go:168] LocalClient.Create starting
I0604 15:32:20.122379    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0604 15:32:20.122952    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:32:20.123096    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:32:20.123122    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0604 15:32:20.123122    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:32:20.123122    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:32:20.132723    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:32:21.133703    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:32:21.133751    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0007917s)
I0604 15:32:21.141371    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:32:21.141371    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:32:22.181926    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:32:22.181926    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0404367s)
I0604 15:32:22.182086    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:32:22.182138    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
I0604 15:32:22.189632    5556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0604 15:32:23.201455    5556 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c940] amended:false}} dirty:map[] misses:0}
I0604 15:32:23.201455    5556 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0604 15:32:23.222020    5556 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c940] amended:true}} dirty:map[192.168.49.0:0xc00078c940 192.168.58.0:0xc00078cb20] misses:0}
I0604 15:32:23.222020    5556 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0604 15:32:23.222020    5556 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0604 15:32:23.228673    5556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
W0604 15:32:24.218824    5556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
E0604 15:32:24.218824    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.58.0/24: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
W0604 15:32:24.218824    5556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
I0604 15:32:24.232969    5556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0604 15:32:25.255233    5556 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0222545s)
I0604 15:32:25.263419    5556 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
W0604 15:32:26.270514    5556 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0604 15:32:26.270514    5556 cli_runner.go:217] Completed: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0069813s)
I0604 15:32:26.270514    5556 client.go:171] LocalClient.Create took 6.1480724s
I0604 15:32:28.287075    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:32:28.294316    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:29.314489    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:29.314489    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0198984s)
I0604 15:32:29.314814    5556 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:29.606965    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:30.637327    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:30.637491    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0303509s)
W0604 15:32:30.637753    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
W0604 15:32:30.637753    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:30.647917    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:32:30.653574    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:31.669080    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:31.669080    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0154957s)
I0604 15:32:31.669080    5556 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:31.887985    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:32.870208    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
W0604 15:32:32.870550    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
W0604 15:32:32.870589    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:32.870614    5556 start.go:134] duration metric: createHost completed in 12.7531682s
I0604 15:32:32.880218    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:32:32.886181    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:33.945699    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:33.945699    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0595077s)
I0604 15:32:33.945699    5556 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:34.269900    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:35.323728    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:35.323728    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0535464s)
W0604 15:32:35.323870    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:32:35.323870    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:35.333519    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:32:35.341484    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:36.364546    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:36.364546    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0228472s)
I0604 15:32:36.364779    5556 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:36.719003    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:37.746424    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:37.746424    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0272252s)
W0604 15:32:37.746623    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:32:37.746623    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:37.746623    5556 fix.go:57] fixHost completed within 45.6515903s
I0604 15:32:37.746623    5556 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 45.6515903s
W0604 15:32:37.747310    5556 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system

I0604 15:32:37.752025    5556 out.go:177] 
W0604 15:32:37.754122    5556 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system

W0604 15:32:37.754122    5556 out.go:239] * Suggestion: Restart Docker
W0604 15:32:37.754122    5556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
I0604 15:32:37.756791    5556 out.go:177] 
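The port lookup that keeps failing throughout this run invokes `docker container inspect` with a Go template. As a rough, self-contained illustration — `hostPort` and the sample data below are made up for this sketch; the real JSON comes from the Docker daemon — the same template can be exercised with the standard `text/template` package:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// hostPort evaluates the template from the log against sample in-memory
// data shaped like container-inspect output (illustrative only).
func hostPort() (string, error) {
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]map[string]string{
				"22/tcp": {{"HostIP": "127.0.0.1", "HostPort": "55010"}},
			},
		},
	}
	tmpl, err := template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	port, err := hostPort()
	fmt.Println(port, err) // 55010 <nil> for the sample data
}
```

When the container does not exist, `docker container inspect` exits non-zero before the template is ever applied, which is why every attempt above ends in "No such container".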
--- FAIL: TestFunctional/serial/LogsCmd (3.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2198680842\001\logs.txt
functional_test.go:1242: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2198680842\001\logs.txt: exit status 80 (4.2072091s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_754.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1244: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2198680842\001\logs.txt failed: exit status 80
functional_test.go:1247: expected empty minikube logs output, but got: 
***
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_754.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr *****
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| delete  | --all                               | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
| delete  | -p                                  | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
|         | download-only-20220604151954-5712   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-only-20220604151954-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:20 GMT | 04 Jun 22 15:20 GMT |
|         | download-only-20220604151954-5712   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-docker-20220604152059-5712 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:21 GMT | 04 Jun 22 15:21 GMT |
|         | download-docker-20220604152059-5712 |                                     |                   |                |                     |                     |
| delete  | -p                                  | binary-mirror-20220604152145-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:21 GMT | 04 Jun 22 15:22 GMT |
|         | binary-mirror-20220604152145-5712   |                                     |                   |                |                     |                     |
| delete  | -p addons-20220604152202-5712       | addons-20220604152202-5712          | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:23 GMT | 04 Jun 22 15:23 GMT |
| delete  | -p nospam-20220604152324-5712       | nospam-20220604152324-5712          | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:26 GMT | 04 Jun 22 15:26 GMT |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache add                           |                                     |                   |                |                     |                     |
|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
| cache   | functional-20220604152644-5712      | functional-20220604152644-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|         | cache reload                        |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
* 
* ==> Last Start <==
* Log file created at: 2022/06/04 15:30:48
Running on machine: minikube2
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0604 15:30:48.822616    5556 out.go:296] Setting OutFile to fd 636 ...
I0604 15:30:48.877033    5556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0604 15:30:48.877033    5556 out.go:309] Setting ErrFile to fd 972...
I0604 15:30:48.877089    5556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0604 15:30:48.888979    5556 out.go:303] Setting JSON to false
I0604 15:30:48.891643    5556 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7720,"bootTime":1654348928,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0604 15:30:48.891643    5556 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0604 15:30:48.895748    5556 out.go:177] * [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0604 15:30:48.900035    5556 notify.go:193] Checking for updates...
I0604 15:30:48.902110    5556 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0604 15:30:48.904922    5556 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0604 15:30:48.907495    5556 out.go:177]   - MINIKUBE_LOCATION=14123
I0604 15:30:48.909980    5556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0604 15:30:48.913591    5556 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0604 15:30:48.913838    5556 driver.go:358] Setting default libvirt URI to qemu:///system
I0604 15:30:51.415997    5556 docker.go:137] docker version: linux-20.10.16
I0604 15:30:51.424911    5556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0604 15:30:53.352092    5556 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9271617s)
I0604 15:30:53.352092    5556 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-04 15:30:52.3902581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
I0604 15:30:53.360094    5556 out.go:177] * Using the docker driver based on existing profile
I0604 15:30:53.360094    5556 start.go:284] selected driver: docker
I0604 15:30:53.360094    5556 start.go:806] validating driver "docker" against &{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false}
I0604 15:30:53.360094    5556 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0604 15:30:53.382917    5556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0604 15:30:55.338951    5556 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9558143s)
I0604 15:30:55.339196    5556 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-04 15:30:54.3823936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
I0604 15:30:55.399237    5556 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0604 15:30:55.399237    5556 cni.go:95] Creating CNI manager for ""
I0604 15:30:55.399237    5556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0604 15:30:55.399237    5556 start_flags.go:306] config:
{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false}
I0604 15:30:55.404611    5556 out.go:177] * Starting control plane node functional-20220604152644-5712 in cluster functional-20220604152644-5712
I0604 15:30:55.406541    5556 cache.go:120] Beginning downloading kic base image for docker with docker
I0604 15:30:55.408569    5556 out.go:177] * Pulling base image ...
I0604 15:30:55.412266    5556 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0604 15:30:55.412266    5556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
I0604 15:30:55.412266    5556 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
I0604 15:30:55.412266    5556 cache.go:57] Caching tarball of preloaded images
I0604 15:30:55.412266    5556 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0604 15:30:55.412266    5556 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
I0604 15:30:55.413196    5556 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220604152644-5712\config.json ...
I0604 15:30:56.421671    5556 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
I0604 15:30:56.422168    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
I0604 15:30:56.422275    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
I0604 15:30:56.422275    5556 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
I0604 15:30:56.422275    5556 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
I0604 15:30:56.422275    5556 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
I0604 15:30:56.422892    5556 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
I0604 15:30:56.422892    5556 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
I0604 15:30:56.422892    5556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
I0604 15:30:58.630301    5556 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
I0604 15:30:58.630301    5556 cache.go:206] Successfully downloaded all kic artifacts
I0604 15:30:58.630301    5556 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0604 15:30:58.630866    5556 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 565µs
I0604 15:30:58.631246    5556 start.go:94] Skipping create...Using existing machine configuration
I0604 15:30:58.631369    5556 fix.go:55] fixHost starting: 
I0604 15:30:58.646862    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:30:59.668919    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:30:59.668919    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0220461s)
I0604 15:30:59.668919    5556 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:30:59.668919    5556 fix.go:108] machineExists: false. err=machine does not exist
I0604 15:30:59.674682    5556 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
I0604 15:30:59.677653    5556 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
I0604 15:30:59.690645    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:00.716522    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:00.716522    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0258669s)
W0604 15:31:00.716522    5556 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:00.716522    5556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:00.733051    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:01.746737    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:01.746737    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0136755s)
I0604 15:31:01.746737    5556 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:01.753490    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:31:02.781964    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:31:02.781964    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0284636s)
I0604 15:31:02.781964    5556 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
I0604 15:31:02.790941    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:03.829249    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:03.829279    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0380485s)
W0604 15:31:03.829391    5556 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:03.839208    5556 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
W0604 15:31:04.899079    5556 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
I0604 15:31:04.899079    5556 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": (1.0598033s)
I0604 15:31:04.899079    5556 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:05.922276    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:06.964265    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:06.964265    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.041979s)
I0604 15:31:06.964265    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:06.964265    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:06.964265    5556 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:07.535971    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:08.557603    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:08.557815    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0216217s)
I0604 15:31:08.557815    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:08.557815    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:08.557910    5556 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:09.650639    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:10.678902    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:10.678976    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.028253s)
I0604 15:31:10.679046    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:10.679046    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:10.679119    5556 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:12.009129    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:12.996529    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:12.996529    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:12.996529    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:12.996529    5556 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:14.590588    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:15.627998    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:15.627998    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0373994s)
I0604 15:31:15.627998    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:15.627998    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:15.627998    5556 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:17.980904    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:19.004569    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:19.004569    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0236546s)
I0604 15:31:19.004569    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:19.004569    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:19.004569    5556 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:23.535981    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:24.527488    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:24.527488    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:24.527488    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:31:24.527488    5556 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

I0604 15:31:24.536258    5556 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
I0604 15:31:25.541131    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:31:26.587804    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:31:26.587904    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0464683s)
I0604 15:31:26.595750    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:31:27.621336    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:31:27.621336    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0255759s)
I0604 15:31:27.628127    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:31:27.628127    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:31:28.662956    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:31:28.662956    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0348189s)
I0604 15:31:28.662956    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:31:28.662956    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
W0604 15:31:28.664305    5556 delete.go:139] delete failed (probably ok) <nil>
I0604 15:31:28.664305    5556 fix.go:115] Sleeping 1 second for extra luck!
I0604 15:31:29.669267    5556 start.go:131] createHost starting for "" (driver="docker")
I0604 15:31:29.673660    5556 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0604 15:31:29.673920    5556 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
I0604 15:31:29.673994    5556 client.go:168] LocalClient.Create starting
I0604 15:31:29.674662    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0604 15:31:29.674850    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:31:29.674907    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:31:29.675174    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0604 15:31:29.675368    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:31:29.675368    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:31:29.684088    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:31:30.703966    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:31:30.703966    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.01967s)
I0604 15:31:30.711759    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:31:30.711759    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:31:31.740076    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:31:31.740076    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0283068s)
I0604 15:31:31.740076    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:31:31.740076    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
I0604 15:31:31.747903    5556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0604 15:31:32.756934    5556 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0088405s)
I0604 15:31:32.785386    5556 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00078c940] misses:0}
I0604 15:31:32.785837    5556 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0604 15:31:32.785837    5556 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0604 15:31:32.793066    5556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
W0604 15:31:33.805980    5556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
I0604 15:31:33.805980    5556 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: (1.0129035s)
E0604 15:31:33.805980    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.49.0/24: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
W0604 15:31:33.805980    5556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 9bfadd0a80293c3c78f22ea5b509a7746c5f7d1c88db7221afd3e8629b403631 (br-9bfadd0a8029): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4

I0604 15:31:33.820421    5556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0604 15:31:34.837392    5556 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0169611s)
I0604 15:31:34.847196    5556 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
W0604 15:31:35.869493    5556 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0604 15:31:35.869493    5556 cli_runner.go:217] Completed: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0222861s)
I0604 15:31:35.869493    5556 client.go:171] LocalClient.Create took 6.1954357s
I0604 15:31:37.890920    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:31:37.896927    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:38.908668    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:38.908668    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0117305s)
I0604 15:31:38.908668    5556 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:39.094086    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:40.103428    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:40.103428    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0093312s)
W0604 15:31:40.103428    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:40.103428    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:40.113492    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:31:40.119819    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:41.149827    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:41.149827    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0298269s)
I0604 15:31:41.149966    5556 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:41.363970    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:42.372315    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:42.372315    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0081432s)
W0604 15:31:42.372613    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:42.372613    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:42.372613    5556 start.go:134] duration metric: createHost completed in 12.7032179s
I0604 15:31:42.381937    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:31:42.387937    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:43.428496    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:43.428496    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0404418s)
I0604 15:31:43.428496    5556 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:43.777406    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:44.805121    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:44.805173    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0273656s)
W0604 15:31:44.805173    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:44.805173    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:44.815106    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:31:44.821027    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:45.851220    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:31:45.851253    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0300291s)
I0604 15:31:45.851253    5556 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:46.082396    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:31:47.081532    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
W0604 15:31:47.081532    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712

W0604 15:31:47.081532    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:47.081532    5556 fix.go:57] fixHost completed within 48.449699s
I0604 15:31:47.081532    5556 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 48.4501768s
W0604 15:31:47.081532    5556 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
W0604 15:31:47.082253    5556 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system

I0604 15:31:47.082297    5556 start.go:614] Will try again in 5 seconds ...
I0604 15:31:52.094081    5556 start.go:352] acquiring machines lock for functional-20220604152644-5712: {Name:mkd8e5d21c30b3e319f1bf6be936dc9c23190696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0604 15:31:52.094570    5556 start.go:356] acquired machines lock for "functional-20220604152644-5712" in 0s
I0604 15:31:52.094570    5556 start.go:94] Skipping create...Using existing machine configuration
I0604 15:31:52.094570    5556 fix.go:55] fixHost starting: 
I0604 15:31:52.108126    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:53.112277    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:53.112328    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0038835s)
I0604 15:31:53.112328    5556 fix.go:103] recreateIfNeeded on functional-20220604152644-5712: state= err=unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:53.112393    5556 fix.go:108] machineExists: false. err=machine does not exist
I0604 15:31:53.115717    5556 out.go:177] * docker "functional-20220604152644-5712" container is missing, will recreate.
I0604 15:31:53.118909    5556 delete.go:124] DEMOLISHING functional-20220604152644-5712 ...
I0604 15:31:53.131759    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:54.161977    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:54.162157    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0302074s)
W0604 15:31:54.162217    5556 stop.go:75] unable to get state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:54.162217    5556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:54.177001    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:55.188835    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:31:55.188884    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0117238s)
I0604 15:31:55.188949    5556 delete.go:82] Unable to get host status for functional-20220604152644-5712, assuming it has already been deleted: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:55.196561    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:31:56.230116    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:31:56.230116    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0333506s)
I0604 15:31:56.230350    5556 kic.go:356] could not find the container functional-20220604152644-5712 to remove it. will try anyways
I0604 15:31:56.237898    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:31:57.232301    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
W0604 15:31:57.232301    5556 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:57.239785    5556 cli_runner.go:164] Run: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0"
W0604 15:31:58.237484    5556 cli_runner.go:211] docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0" returned with exit code 1
I0604 15:31:58.237484    5556 oci.go:625] error shutdown functional-20220604152644-5712: docker exec --privileged -t functional-20220604152644-5712 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:31:59.257274    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:00.280199    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:00.280199    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0229149s)
I0604 15:32:00.280199    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:00.280199    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:00.280199    5556 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:00.782772    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:01.790891    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:01.790891    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0081089s)
I0604 15:32:01.790891    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:01.790891    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:01.790891    5556 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:02.389363    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:03.428979    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:03.428979    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0396055s)
I0604 15:32:03.428979    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:03.428979    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:03.428979    5556 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:04.337425    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:05.378305    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:05.378305    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.04063s)
I0604 15:32:05.378305    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:05.378305    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:05.378305    5556 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:07.386110    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:08.394144    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:08.394144    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0078658s)
I0604 15:32:08.394221    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:08.394221    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:08.394221    5556 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:10.232873    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:11.262583    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:11.262583    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0295905s)
I0604 15:32:11.262798    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:11.262798    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:11.262856    5556 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:13.947004    5556 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
W0604 15:32:14.975561    5556 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
I0604 15:32:14.975583    5556 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (1.0283586s)
I0604 15:32:14.975737    5556 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:14.975737    5556 oci.go:639] temporary error: container functional-20220604152644-5712 status is  but expect it to be exited
I0604 15:32:14.975737    5556 oci.go:88] couldn't shut down functional-20220604152644-5712 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:14.982820    5556 cli_runner.go:164] Run: docker rm -f -v functional-20220604152644-5712
I0604 15:32:16.009593    5556 cli_runner.go:217] Completed: docker rm -f -v functional-20220604152644-5712: (1.0267621s)
I0604 15:32:16.015579    5556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220604152644-5712
W0604 15:32:17.055206    5556 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220604152644-5712 returned with exit code 1
I0604 15:32:17.055206    5556 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220604152644-5712: (1.0396165s)
I0604 15:32:17.062643    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:32:18.074003    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:32:18.074003    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.01135s)
I0604 15:32:18.081042    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:32:18.081042    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:32:19.101825    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:32:19.101825    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.020662s)
I0604 15:32:19.101906    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:32:19.101906    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
W0604 15:32:19.102807    5556 delete.go:139] delete failed (probably ok) <nil>
I0604 15:32:19.102807    5556 fix.go:115] Sleeping 1 second for extra luck!
I0604 15:32:20.117316    5556 start.go:131] createHost starting for "" (driver="docker")
I0604 15:32:20.122026    5556 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0604 15:32:20.122379    5556 start.go:165] libmachine.API.Create for "functional-20220604152644-5712" (driver="docker")
I0604 15:32:20.122379    5556 client.go:168] LocalClient.Create starting
I0604 15:32:20.122379    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0604 15:32:20.122952    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:32:20.123096    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:32:20.123122    5556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0604 15:32:20.123122    5556 main.go:134] libmachine: Decoding PEM data...
I0604 15:32:20.123122    5556 main.go:134] libmachine: Parsing certificate...
I0604 15:32:20.132723    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0604 15:32:21.133703    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0604 15:32:21.133751    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0007917s)
I0604 15:32:21.141371    5556 network_create.go:272] running [docker network inspect functional-20220604152644-5712] to gather additional debugging logs...
I0604 15:32:21.141371    5556 cli_runner.go:164] Run: docker network inspect functional-20220604152644-5712
W0604 15:32:22.181926    5556 cli_runner.go:211] docker network inspect functional-20220604152644-5712 returned with exit code 1
I0604 15:32:22.181926    5556 cli_runner.go:217] Completed: docker network inspect functional-20220604152644-5712: (1.0404367s)
I0604 15:32:22.182086    5556 network_create.go:275] error running [docker network inspect functional-20220604152644-5712]: docker network inspect functional-20220604152644-5712: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220604152644-5712
I0604 15:32:22.182138    5556 network_create.go:277] output of [docker network inspect functional-20220604152644-5712]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220604152644-5712

** /stderr **
I0604 15:32:22.189632    5556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0604 15:32:23.201455    5556 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c940] amended:false}} dirty:map[] misses:0}
I0604 15:32:23.201455    5556 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0604 15:32:23.222020    5556 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c940] amended:true}} dirty:map[192.168.49.0:0xc00078c940 192.168.58.0:0xc00078cb20] misses:0}
I0604 15:32:23.222020    5556 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0604 15:32:23.222020    5556 network_create.go:115] attempt to create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0604 15:32:23.228673    5556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712
W0604 15:32:24.218824    5556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712 returned with exit code 1
E0604 15:32:24.218824    5556 network_create.go:104] error while trying to create docker network functional-20220604152644-5712 192.168.58.0/24: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
W0604 15:32:24.218824    5556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220604152644-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 743cfe80c21763d426a1586060c77304005966760530933da4fbbe90bb4e37db (br-743cfe80c217): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
I0604 15:32:24.232969    5556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0604 15:32:25.255233    5556 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0222545s)
I0604 15:32:25.263419    5556 cli_runner.go:164] Run: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true
W0604 15:32:26.270514    5556 cli_runner.go:211] docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0604 15:32:26.270514    5556 cli_runner.go:217] Completed: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0069813s)
I0604 15:32:26.270514    5556 client.go:171] LocalClient.Create took 6.1480724s
I0604 15:32:28.287075    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:32:28.294316    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:29.314489    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:29.314489    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0198984s)
I0604 15:32:29.314814    5556 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:29.606965    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:30.637327    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:30.637491    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0303509s)
W0604 15:32:30.637753    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
W0604 15:32:30.637753    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:30.647917    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:32:30.653574    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:31.669080    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:31.669080    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0154957s)
I0604 15:32:31.669080    5556 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:31.887985    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:32.870208    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
W0604 15:32:32.870550    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
W0604 15:32:32.870589    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:32.870614    5556 start.go:134] duration metric: createHost completed in 12.7531682s
I0604 15:32:32.880218    5556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0604 15:32:32.886181    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:33.945699    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:33.945699    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0595077s)
I0604 15:32:33.945699    5556 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:34.269900    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:35.323728    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:35.323728    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0535464s)
W0604 15:32:35.323870    5556 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
W0604 15:32:35.323870    5556 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:35.333519    5556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0604 15:32:35.341484    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:36.364546    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:36.364546    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0228472s)
I0604 15:32:36.364779    5556 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:36.719003    5556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712
W0604 15:32:37.746424    5556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712 returned with exit code 1
I0604 15:32:37.746424    5556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: (1.0272252s)
W0604 15:32:37.746623    5556 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
W0604 15:32:37.746623    5556 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220604152644-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220604152644-5712: exit status 1
stdout:

stderr:
Error: No such container: functional-20220604152644-5712
I0604 15:32:37.746623    5556 fix.go:57] fixHost completed within 45.6515903s
I0604 15:32:37.746623    5556 start.go:81] releasing machines lock for "functional-20220604152644-5712", held for 45.6515903s
W0604 15:32:37.747310    5556 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220604152644-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
I0604 15:32:37.752025    5556 out.go:177] 
W0604 15:32:37.754122    5556 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220604152644-5712 container: docker volume create functional-20220604152644-5712 --label name.minikube.sigs.k8s.io=functional-20220604152644-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220604152644-5712: error while creating volume root path '/var/lib/docker/volumes/functional-20220604152644-5712': mkdir /var/lib/docker/volumes/functional-20220604152644-5712: read-only file system
W0604 15:32:37.754122    5556 out.go:239] * Suggestion: Restart Docker
W0604 15:32:37.754122    5556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
I0604 15:32:37.756791    5556 out.go:177] 
* 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (4.39s)
TestFunctional/parallel/StatusCmd (13.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status: exit status 7 (2.9901901s)

-- stdout --
	functional-20220604152644-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	E0604 15:33:19.574822    5740 status.go:258] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	E0604 15:33:19.574822    5740 status.go:261] The "functional-20220604152644-5712" host does not exist!

** /stderr **
functional_test.go:848: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status" : exit status 7
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (2.9964974s)

-- stdout --
	host:Nonexistent,kublet:Nonexistent,apiserver:Nonexistent,kubeconfig:Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:22.572204    5776 status.go:258] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	E0604 15:33:22.572204    5776 status.go:261] The "functional-20220604152644-5712" host does not exist!

** /stderr **
functional_test.go:854: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status -o json

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status -o json: exit status 7 (2.9833185s)

-- stdout --
	{"Name":"functional-20220604152644-5712","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	E0604 15:33:25.563775    3456 status.go:258] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	E0604 15:33:25.563775    3456 status.go:261] The "functional-20220604152644-5712" host does not exist!

** /stderr **
functional_test.go:866: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1463788s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (3.0099289s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:29.739001    6820 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/StatusCmd (13.14s)

TestFunctional/parallel/ServiceCmd (5.46s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220604152644-5712 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8: exit status 1 (290.2417ms)

** stderr ** 
	W0604 15:32:59.620229    6004 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220604152644-5712" does not exist

** /stderr **
functional_test.go:1436: failed to create hello-node deployment with this command "kubectl --context functional-20220604152644-5712 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220604152644-5712 describe po hello-node
functional_test.go:1405: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 describe po hello-node: exit status 1 (308.4867ms)

** stderr ** 
	W0604 15:32:59.931140    7768 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:1407: "kubectl --context functional-20220604152644-5712 describe po hello-node" failed: exit status 1
functional_test.go:1409: hello-node pod describe:
functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220604152644-5712 logs -l app=hello-node
functional_test.go:1411: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 logs -l app=hello-node: exit status 1 (278.129ms)

** stderr ** 
	W0604 15:33:00.221103    5596 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:1413: "kubectl --context functional-20220604152644-5712 logs -l app=hello-node" failed: exit status 1
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220604152644-5712 describe svc hello-node
functional_test.go:1417: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 describe svc hello-node: exit status 1 (304.37ms)

** stderr ** 
	W0604 15:33:00.514929    1668 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:1419: "kubectl --context functional-20220604152644-5712 describe svc hello-node" failed: exit status 1
functional_test.go:1421: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1605671s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (3.0798466s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:04.824225    5972 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmd (5.46s)

TestFunctional/parallel/ServiceCmdConnect (5.33s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220604152644-5712 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8: exit status 1 (296.8993ms)

** stderr ** 
	W0604 15:32:58.529922    1744 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220604152644-5712" does not exist

** /stderr **
functional_test.go:1562: failed to create hello-node deployment with this command "kubectl --context functional-20220604152644-5712 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1527: service test failed - dumping debug information
functional_test.go:1528: -----------------------service failure post-mortem--------------------------------
functional_test.go:1531: (dbg) Run:  kubectl --context functional-20220604152644-5712 describe po hello-node-connect
functional_test.go:1531: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 describe po hello-node-connect: exit status 1 (308.4022ms)

** stderr ** 
	W0604 15:32:58.838899    5776 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:1533: "kubectl --context functional-20220604152644-5712 describe po hello-node-connect" failed: exit status 1
functional_test.go:1535: hello-node pod describe:
functional_test.go:1537: (dbg) Run:  kubectl --context functional-20220604152644-5712 logs -l app=hello-node-connect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1537: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 logs -l app=hello-node-connect: exit status 1 (290.3771ms)

** stderr ** 
	W0604 15:32:59.141923    7172 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:1539: "kubectl --context functional-20220604152644-5712 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1541: hello-node logs:
functional_test.go:1543: (dbg) Run:  kubectl --context functional-20220604152644-5712 describe svc hello-node-connect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1543: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 describe svc hello-node-connect: exit status 1 (292.4204ms)

** stderr ** 
	W0604 15:32:59.441372    4568 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:1545: "kubectl --context functional-20220604152644-5712 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1547: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1502332s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.9517483s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:03.619882    5976 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (5.33s)

TestFunctional/parallel/PersistentVolumeClaim (4.31s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-20220604152644-5712" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1533547s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (3.1449102s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:32:58.276794    8412 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (4.31s)

TestFunctional/parallel/SSHCmd (10.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "echo hello": exit status 80 (3.2518729s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_19232f4b01a263c7fe4da55009757983856b4b95_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1659: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"echo hello\"" : exit status 80
functional_test.go:1663: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"echo hello\""
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "cat /etc/hostname": exit status 80 (3.2783404s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_38bcdef24fb924cc90e97c11e7d475c51b658987_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1677: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"cat /etc/hostname\"" : exit status 80
functional_test.go:1681: expected minikube ssh command output to be -"functional-20220604152644-5712"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1415872s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.9467484s)

-- stdout --
	Nonexistent

                                                
** stderr ** 
	E0604 15:33:43.652212    8760 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/SSHCmd (10.63s)

TestFunctional/parallel/CpCmd (13.2s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cp testdata\cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cp testdata\cp-test.txt /home/docker/cp-test.txt: exit status 80 (3.3272866s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                        │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                      │
	│                                                                                                                    │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                           │
	│    * Please also attach the following file to the GitHub issue:                                                    │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cp_61e6e7c82587b4e90872857c87eff14ac40e447c_1.log    │
	│                                                                                                                    │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cp testdata\\cp-test.txt /home/docker/cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh -n functional-20220604152644-5712 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh -n functional-20220604152644-5712 "sudo cat /home/docker/cp-test.txt": exit status 80 (3.2329424s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f9fbdc48f4e6e25fa352a85c2bd7e3c2c13fee65_12.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh -n functional-20220604152644-5712 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"\n\n",
)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cp functional-20220604152644-5712:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd3963414763\001\cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cp functional-20220604152644-5712:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd3963414763\001\cp-test.txt: exit status 80 (3.2812044s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                        │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                      │
	│                                                                                                                    │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                           │
	│    * Please also attach the following file to the GitHub issue:                                                    │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cp_722f025bc30c51c573800ee8614ea3d0fde6adcf_0.log    │
	│                                                                                                                    │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cp functional-20220604152644-5712:/home/docker/cp-test.txt C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\TestFunctionalparallelCpCmd3963414763\\001\\cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh -n functional-20220604152644-5712 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh -n functional-20220604152644-5712 "sudo cat /home/docker/cp-test.txt": exit status 80 (3.3429068s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_dashboard_55f3863523053bb6201ebb67625de287d9eed8d4_2.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh -n functional-20220604152644-5712 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:526: failed to read test file 'testdata/cp-test.txt' : open C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd3963414763\001\cp-test.txt: The system cannot find the file specified.
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"\n\n",
+ 	"",
)
--- FAIL: TestFunctional/parallel/CpCmd (13.20s)

TestFunctional/parallel/MySQL (4.6s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220604152644-5712 replace --force -f testdata\mysql.yaml
functional_test.go:1719: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 replace --force -f testdata\mysql.yaml: exit status 1 (323.5965ms)

** stderr ** 
	W0604 15:33:07.755182    4748 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220604152644-5712" does not exist

** /stderr **
functional_test.go:1721: failed to kubectl replace mysql: args "kubectl --context functional-20220604152644-5712 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.2034537s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (3.0622509s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:12.113993    3604 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/MySQL (4.60s)

TestFunctional/parallel/FileSync (7.51s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/5712/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/test/nested/copy/5712/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/test/nested/copy/5712/hosts": exit status 80 (3.2709894s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_442f1b910de520aaf8a9ce3340540e518c9ff962_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1859: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/test/nested/copy/5712/hosts" failed: exit status 80
functional_test.go:1862: file sync test content: 

functional_test.go:1872: /etc/sync.test content mismatch (-want +got):
string(
- 	"Test file for checking file sync process",
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/FileSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712

=== CONT  TestFunctional/parallel/FileSync
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.203646s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/FileSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (3.0280563s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:11.116224    7752 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/FileSync (7.51s)

TestFunctional/parallel/CertSync (23.73s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/5712.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/5712.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/5712.pem": exit status 80 (3.2780679s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf6b388c95f882df0312ef7cc46a66574d6ed110_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/5712.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"sudo cat /etc/ssl/certs/5712.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/5712.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/5712.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /usr/share/ca-certificates/5712.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /usr/share/ca-certificates/5712.pem": exit status 80 (3.3201707s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_523ffa5e8b6acdd9d4a4422593d833593f7e7b4e_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1901: failed to check existence of "/usr/share/ca-certificates/5712.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"sudo cat /usr/share/ca-certificates/5712.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/5712.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 80 (3.3035236s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_fea49abfab0323d8512b535581403500420d48f0_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/57122.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/57122.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/57122.pem": exit status 80 (3.2179974s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_15b6d003fc837972c0806d3127409faa656eb0c9_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/57122.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"sudo cat /etc/ssl/certs/57122.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/57122.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/57122.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /usr/share/ca-certificates/57122.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /usr/share/ca-certificates/57122.pem": exit status 80 (3.2501266s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4bde4b694e03ee4d48e9de8801bc7ca2cf2696db_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1928: failed to check existence of "/usr/share/ca-certificates/57122.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"sudo cat /usr/share/ca-certificates/57122.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/57122.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 80 (3.2287931s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_15a8ec4b54c4600ccdf64f589dd9f75cfe823832_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/CertSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712

=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1342308s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (2.9807541s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:28.551917    1536 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/CertSync (23.73s)
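Editor's note: every probe in this CertSync block fails identically — `minikube ssh` exits with status 80 because the guest container no longer exists, so the test reads `"\n\n"` instead of the certificate body. The comparison the test performs can be sketched like this (function name hypothetical; semantics taken from the `-want +got` diffs above):

```python
def verify_pem(want: str, got: str, exit_code: int) -> bool:
    """Mirror of the test's check: the `minikube ssh "sudo cat <path>"`
    call must succeed and the bytes read from the guest must equal the
    local PEM file exactly."""
    return exit_code == 0 and got == want

# In the runs above, exit_code is 80 and got is "\n\n" for all three
# guest paths, so both conditions fail for each of them.
```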

TestFunctional/parallel/NodeLabels (4.49s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220604152644-5712 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Non-zero exit: kubectl --context functional-20220604152644-5712 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (307.1769ms)

** stderr ** 
	W0604 15:33:12.357967    3928 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:216: failed to 'kubectl get nodes' with args "kubectl --context functional-20220604152644-5712 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:222: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W0604 15:33:12.357967    3928 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W0604 15:33:12.357967    3928 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W0604 15:33:12.357967    3928 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W0604 15:33:12.357967    3928 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220604152644-5712
	* cluster has no server defined

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220604152644-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220604152644-5712: exit status 1 (1.1464013s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220604152644-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712

=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220604152644-5712 -n functional-20220604152644-5712: exit status 7 (3.0070578s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:33:16.576237    2952 status.go:247] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220604152644-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/NodeLabels (4.49s)
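Editor's note: the go-template in the kubectl command above prints every label key on the first node, and the test then looks for the four `minikube.k8s.io/*` keys listed in the failures. A sketch of the same check in Python over a node object (helper name hypothetical; required keys taken from the log):

```python
REQUIRED = [
    "minikube.k8s.io/commit",
    "minikube.k8s.io/version",
    "minikube.k8s.io/updated_at",
    "minikube.k8s.io/name",
]

def missing_labels(node: dict) -> list:
    """Return which of the required minikube labels are absent from a
    node object's metadata.labels map."""
    labels = node.get("metadata", {}).get("labels", {})
    return [key for key in REQUIRED if key not in labels]

# Here kubectl produced no node JSON at all (no kubeconfig, no context),
# so all four labels are reported missing, as seen above.
```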

TestFunctional/parallel/NonActiveRuntimeDisabled (3.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh "sudo systemctl is-active crio": exit status 80 (3.2835332s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1956: output of 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **: exit status 80
functional_test.go:1959: For runtime "docker": expected "crio" to be inactive but got "\n\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (3.28s)
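Editor's note: a sketch of the assertion under assumed semantics — the test wants `systemctl is-active crio` to report a non-active state while docker is the selected runtime, and the empty `"\n\n"` answer (the ssh probe never reached a guest) fails it. Helper name and exact acceptance rule are assumptions, not minikube's code:

```python
def check_nonactive(service: str, is_active_output: str):
    """Assumed check: the queried service must report some concrete
    non-active state; 'active' fails, and so does an empty answer,
    which means the probe itself never ran."""
    state = is_active_output.strip()
    if not state or state == "active":
        return 'expected "%s" to be inactive but got %r' % (service, is_active_output)
    return None

# Above: the ssh command exited 80 with output "\n\n", so the check
# fails with an empty-answer message rather than a real state.
```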

TestFunctional/parallel/DockerEnv/powershell (9.44s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220604152644-5712"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220604152644-5712": exit status 1 (9.4292148s)

-- stdout --
	functional-20220604152644-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_docker-env_547776f721aba6dceba24106cb61c1127a06fa4f_3.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check 
	the spelling of the name, or if a path was included, verify that the path is correct and try again.
	At line:1 char:1
	+ false exit code 80
	+ ~~~~~
	    + CategoryInfo          : ObjectNotFound: (false:String) [], CommandNotFoundException
	    + FullyQualifiedErrorId : CommandNotFoundException
	 
	E0604 15:33:03.251516    8736 status.go:258] status error: host: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	E0604 15:33:03.251516    8736 status.go:261] The "functional-20220604152644-5712" host does not exist!

** /stderr **
functional_test.go:497: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (9.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:143: failed to get Kubernetes client for "functional-20220604152644-5712": client config: context "functional-20220604152644-5712" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2: exit status 80 (3.2400847s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 15:33:39.589249    6516 out.go:296] Setting OutFile to fd 956 ...
	I0604 15:33:39.663039    6516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:39.663069    6516 out.go:309] Setting ErrFile to fd 624...
	I0604 15:33:39.663128    6516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:39.675268    6516 mustload.go:65] Loading cluster: functional-20220604152644-5712
	I0604 15:33:39.676725    6516 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:33:39.697258    6516 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:33:42.312530    6516 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:33:42.312530    6516 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (2.615245s)
	I0604 15:33:42.315919    6516 out.go:177] 
	W0604 15:33:42.317786    6516 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:33:42.317786    6516 out.go:239] * 
	* 
	W0604 15:33:42.574125    6516 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 15:33:42.574561    6516 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (3.24s)
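Editor's note: per the `got="\n\n", want=*"No changes"*` line above, the test only requires the literal phrase "No changes" somewhere in update-context's stdout. A minimal sketch (helper name hypothetical):

```python
def no_changes_reported(stdout: str) -> bool:
    """The assertion reduces to a substring match on the command's
    stdout; the run above produced only "\n\n", so it fails."""
    return "No changes" in stdout
```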

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.36s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2: exit status 80 (3.3533641s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 15:33:42.862183    5596 out.go:296] Setting OutFile to fd 556 ...
	I0604 15:33:42.930592    5596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:42.930592    5596 out.go:309] Setting ErrFile to fd 696...
	I0604 15:33:42.930592    5596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:42.944221    5596 mustload.go:65] Loading cluster: functional-20220604152644-5712
	I0604 15:33:42.944490    5596 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:33:42.963069    5596 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:33:45.658065    5596 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:33:45.658252    5596 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (2.6947992s)
	I0604 15:33:45.662633    5596 out.go:177] 
	W0604 15:33:45.664801    5596 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:33:45.664854    5596 out.go:239] * 
	* 
	W0604 15:33:45.921991    5596 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 15:33:45.924628    5596 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.36s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (3.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2: exit status 80 (3.2337856s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 15:33:41.675847    1076 out.go:296] Setting OutFile to fd 804 ...
	I0604 15:33:41.748871    1076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:41.748871    1076 out.go:309] Setting ErrFile to fd 716...
	I0604 15:33:41.748871    1076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:41.761555    1076 mustload.go:65] Loading cluster: functional-20220604152644-5712
	I0604 15:33:41.762388    1076 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:33:41.777231    1076 cli_runner.go:164] Run: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}
	W0604 15:33:44.363037    1076 cli_runner.go:211] docker container inspect functional-20220604152644-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:33:44.363037    1076 cli_runner.go:217] Completed: docker container inspect functional-20220604152644-5712 --format={{.State.Status}}: (2.5857797s)
	I0604 15:33:44.367001    1076 out.go:177] 
	W0604 15:33:44.370009    1076 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	W0604 15:33:44.370009    1076 out.go:239] * 
	* 
	W0604 15:33:44.639016    1076 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 15:33:44.642006    1076 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220604152644-5712 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (3.24s)

TestFunctional/parallel/ImageCommands/ImageListShort (3.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format short: (3.0575957s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format short:

functional_test.go:270: expected k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (3.06s)
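The ImageList* subtests below all assert that default images such as k8s.gcr.io/pause appear in the `image ls` output; with the cluster container gone, every listing comes back empty. A minimal Python sketch of that presence check (the helper name and sample data are hypothetical, not the test's Go code):

```python
# Hedged sketch of the presence check behind functional_test.go:270:
# look for an expected image reference in `minikube image ls` output.
def image_listed(image: str, ls_output: str) -> bool:
    """True if any line of the listing mentions the given image reference."""
    return any(image in line for line in ls_output.splitlines())

empty_listing = ""  # what this failed run produced
print(image_listed("k8s.gcr.io/pause", empty_listing))  # prints False
```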

TestFunctional/parallel/ImageCommands/ImageListTable (2.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format table: (2.8908064s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format table:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:270: expected | k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (2.89s)

TestFunctional/parallel/ImageCommands/ImageListJson (3.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format json: (3.0296104s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format json:
[]
functional_test.go:270: expected ["k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (3.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (2.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format yaml: (2.9349972s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls --format yaml:
[]

functional_test.go:270: expected - k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (2.94s)

TestFunctional/parallel/ImageCommands/ImageBuild (8.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 ssh pgrep buildkitd: exit status 80 (3.2495705s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f5578f3b7737bbd9a15ad6eab50db6197ebdaf5a_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image build -t localhost/my-image:functional-20220604152644-5712 testdata\build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image build -t localhost/my-image:functional-20220604152644-5712 testdata\build: (2.8753245s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls: (2.8102285s)
functional_test.go:438: expected "localhost/my-image:functional-20220604152644-5712" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (8.94s)

TestFunctional/parallel/ImageCommands/Setup (2.17s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.8: exit status 1 (2.1550123s)

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
functional_test.go:339: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (2.17s)

TestFunctional/parallel/Version/components (3.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 version -o=json --components: exit status 80 (3.1525954s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220604152644-5712": docker container inspect functional-20220604152644-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220604152644-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_version_584df66c7473738ba6bddab8b00bad09d875c20e_2.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2198: error version: exit status 80
functional_test.go:2203: expected to see "buildctl" in the minikube version --components but got:

functional_test.go:2203: expected to see "commit" in the minikube version --components but got:

functional_test.go:2203: expected to see "containerd" in the minikube version --components but got:

functional_test.go:2203: expected to see "crictl" in the minikube version --components but got:

functional_test.go:2203: expected to see "crio" in the minikube version --components but got:

functional_test.go:2203: expected to see "ctr" in the minikube version --components but got:

functional_test.go:2203: expected to see "docker" in the minikube version --components but got:

functional_test.go:2203: expected to see "minikubeVersion" in the minikube version --components but got:

functional_test.go:2203: expected to see "podman" in the minikube version --components but got:

functional_test.go:2203: expected to see "run" in the minikube version --components but got:

functional_test.go:2203: expected to see "crun" in the minikube version --components but got:

--- FAIL: TestFunctional/parallel/Version/components (3.15s)
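Each assertion at functional_test.go:2203 above is a substring check over the `version -o=json --components` output; because the command exited with status 80 and printed nothing, every expected component is reported missing. A Python sketch of the same check (the component names come from the log above; the helper itself is hypothetical, not the test's Go code):

```python
# Component names the test expects, taken from the log above.
EXPECTED = ["buildctl", "commit", "containerd", "crictl", "crio", "ctr",
            "docker", "minikubeVersion", "podman", "run", "crun"]

def missing_components(version_output: str) -> list:
    """Return the expected names that do not appear in the version output."""
    return [name for name in EXPECTED if name not in version_output]

# The failed run produced empty output, so all eleven names are missing.
print(len(missing_components("")))  # prints 11
```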

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220604152644-5712

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220604152644-5712: (3.2807187s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls: (2.991209s)
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220604152644-5712" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.27s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220604152644-5712

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220604152644-5712: (3.3458848s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls: (3.1219998s)
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220604152644-5712" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.9: exit status 1 (2.0888462s)

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
functional_test.go:232: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.10s)
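The `docker pull` failure above is the daemon surfacing EROFS from Docker Desktop's WSL2 data disk while writing its containerd metadata (`meta.db`). A minimal sketch of that errno-to-message mapping, assuming the daemon side runs on Linux (as it does under WSL2):

```python
import errno
import os

# EROFS is the errno behind "read-only file system" in the daemon error
# above; on Linux it is errno 30.
print(errno.EROFS)               # 30 on Linux
print(os.strerror(errno.EROFS))  # "Read-only file system" on Linux
```

Once the daemon's backing filesystem flips read-only (typically after a disk or VM fault), every write path fails the same way, which is consistent with the cascade of image-command failures in this run.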

TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image save gcr.io/google-containers/addon-resizer:functional-20220604152644-5712 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image save gcr.io/google-containers/addon-resizer:functional-20220604152644-5712 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (3.120934s)
functional_test.go:381: expected "C:\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.12s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: exit status 80 (2.2456358s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:406: loading image into minikube from file: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.25s)
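The GUEST_IMAGE_LOAD error above shows the Windows tarball path being fed to an image-reference parser, where `:` introduces the tag; `C:\...` can therefore never parse as a reference. A hypothetical guard (not minikube's actual code) that would route drive-letter paths to the archive loader instead:

```python
import re

def is_drive_letter_path(arg: str) -> bool:
    """Detect a Windows path such as C:\\jenkins\\...\\addon-resizer-save.tar.

    In an image reference, ':' separates the name from the tag, and tags
    cannot contain '\\' or '/', so a drive letter followed by a path
    separator can never be a valid reference and must be a file path.
    """
    return re.match(r"^[A-Za-z]:[\\/]", arg) is not None

print(is_drive_letter_path(r"C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar"))  # True
print(is_drive_letter_path("gcr.io/google-containers/addon-resizer:functional-20220604152644-5712"))    # False
```

This matches the symptom in the stderr block: the path survives into the cache-key sanitizer (`C_\jenkins\...`) but then fails `could not parse reference`, i.e. it was classified as a reference rather than a local archive.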

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220604152644-5712
functional_test.go:414: (dbg) Non-zero exit: docker rmi gcr.io/google-containers/addon-resizer:functional-20220604152644-5712: exit status 1 (1.0785298s)

** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220604152644-5712

** /stderr **
functional_test.go:416: failed to remove image from docker: exit status 1

** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220604152644-5712

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

TestIngressAddonLegacy/StartLegacyK8sCluster (76.18s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220604153841-5712 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220604153841-5712 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: exit status 60 (1m16.0847948s)

-- stdout --
	* [ingress-addon-legacy-20220604153841-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node ingress-addon-legacy-20220604153841-5712 in cluster ingress-addon-legacy-20220604153841-5712
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* docker "ingress-addon-legacy-20220604153841-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 15:38:42.184489    7268 out.go:296] Setting OutFile to fd 656 ...
	I0604 15:38:42.238936    7268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:38:42.238936    7268 out.go:309] Setting ErrFile to fd 636...
	I0604 15:38:42.238936    7268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:38:42.252104    7268 out.go:303] Setting JSON to false
	I0604 15:38:42.254098    7268 start.go:115] hostinfo: {"hostname":"minikube2","uptime":8194,"bootTime":1654348928,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:38:42.254098    7268 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:38:42.266006    7268 out.go:177] * [ingress-addon-legacy-20220604153841-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:38:42.270155    7268 notify.go:193] Checking for updates...
	I0604 15:38:42.272298    7268 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:38:42.274377    7268 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:38:42.276557    7268 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 15:38:42.280044    7268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 15:38:42.282287    7268 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:38:44.818955    7268 docker.go:137] docker version: linux-20.10.16
	I0604 15:38:44.827186    7268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:38:46.738389    7268 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9111828s)
	I0604 15:38:46.738389    7268 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 15:38:45.7941412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:38:46.742595    7268 out.go:177] * Using the docker driver based on user configuration
	I0604 15:38:46.745343    7268 start.go:284] selected driver: docker
	I0604 15:38:46.745343    7268 start.go:806] validating driver "docker" against <nil>
	I0604 15:38:46.745343    7268 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 15:38:46.868607    7268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:38:48.809837    7268 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9410508s)
	I0604 15:38:48.810107    7268 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 15:38:47.8517734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:38:48.810107    7268 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 15:38:48.811306    7268 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 15:38:48.815279    7268 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 15:38:48.817586    7268 cni.go:95] Creating CNI manager for ""
	I0604 15:38:48.817798    7268 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 15:38:48.817798    7268 start_flags.go:306] config:
	{Name:ingress-addon-legacy-20220604153841-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220604153841-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerI
Ps:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:38:48.820170    7268 out.go:177] * Starting control plane node ingress-addon-legacy-20220604153841-5712 in cluster ingress-addon-legacy-20220604153841-5712
	I0604 15:38:48.830099    7268 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 15:38:48.832560    7268 out.go:177] * Pulling base image ...
	I0604 15:38:48.835580    7268 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0604 15:38:48.835781    7268 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 15:38:48.893147    7268 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0604 15:38:48.893771    7268 cache.go:57] Caching tarball of preloaded images
	I0604 15:38:48.894270    7268 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0604 15:38:48.896790    7268 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0604 15:38:48.902448    7268 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0604 15:38:48.977941    7268 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0604 15:38:49.937420    7268 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 15:38:49.937420    7268 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:38:49.937420    7268 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:38:49.937420    7268 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 15:38:49.937420    7268 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 15:38:49.937420    7268 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 15:38:49.937420    7268 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 15:38:49.937420    7268 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 15:38:49.937420    7268 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:38:52.233971    7268 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-1610661795: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-1610661795: read-only file system"}
	I0604 15:38:52.233971    7268 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 15:38:52.499799    7268 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0604 15:38:52.500386    7268 preload.go:256] verifying checksumm of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0604 15:38:53.682925    7268 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0604 15:38:53.683799    7268 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220604153841-5712\config.json ...
	I0604 15:38:53.684351    7268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220604153841-5712\config.json: {Name:mkad2dc6069ba8942f9e7434530307ef1ce0b236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 15:38:53.685684    7268 cache.go:206] Successfully downloaded all kic artifacts
	I0604 15:38:53.685684    7268 start.go:352] acquiring machines lock for ingress-addon-legacy-20220604153841-5712: {Name:mk11819ccef3b53238da5667084b68f43fe5b0ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:38:53.686058    7268 start.go:356] acquired machines lock for "ingress-addon-legacy-20220604153841-5712" in 133.2µs
	I0604 15:38:53.686248    7268 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220604153841-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-202206041
53841-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 15:38:53.686248    7268 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:38:54.011475    7268 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0604 15:38:54.012435    7268 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220604153841-5712" (driver="docker")
	I0604 15:38:54.012525    7268 client.go:168] LocalClient.Create starting
	I0604 15:38:54.012797    7268 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:38:54.013635    7268 main.go:134] libmachine: Decoding PEM data...
	I0604 15:38:54.013716    7268 main.go:134] libmachine: Parsing certificate...
	I0604 15:38:54.013979    7268 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:38:54.013979    7268 main.go:134] libmachine: Decoding PEM data...
	I0604 15:38:54.013979    7268 main.go:134] libmachine: Parsing certificate...
	I0604 15:38:54.023448    7268 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:38:55.073910    7268 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:38:55.073910    7268 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0501617s)
	I0604 15:38:55.082744    7268 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220604153841-5712] to gather additional debugging logs...
	I0604 15:38:55.082744    7268 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220604153841-5712
	W0604 15:38:56.104261    7268 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:38:56.104261    7268 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220604153841-5712: (1.0215061s)
	I0604 15:38:56.104261    7268 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220604153841-5712]: docker network inspect ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220604153841-5712
	I0604 15:38:56.104261    7268 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220604153841-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220604153841-5712
	
	** /stderr **
	I0604 15:38:56.112924    7268 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:38:57.150715    7268 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0377795s)
	I0604 15:38:57.172113    7268 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00015c318] misses:0}
	I0604 15:38:57.173199    7268 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:38:57.173199    7268 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220604153841-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 15:38:57.181582    7268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712
	W0604 15:38:58.215550    7268 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:38:58.215550    7268 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712: (1.0339574s)
	E0604 15:38:58.215550    7268 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220604153841-5712 192.168.49.0/24: create docker network ingress-addon-legacy-20220604153841-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e7a5adf7b6e5b6a65a18f0139ec170fd7486d3a9d6467ad812c0100e65bda1f (br-3e7a5adf7b6e): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 15:38:58.215550    7268 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220604153841-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e7a5adf7b6e5b6a65a18f0139ec170fd7486d3a9d6467ad812c0100e65bda1f (br-3e7a5adf7b6e): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220604153841-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e7a5adf7b6e5b6a65a18f0139ec170fd7486d3a9d6467ad812c0100e65bda1f (br-3e7a5adf7b6e): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 15:38:58.231722    7268 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:38:59.289207    7268 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0574738s)
	I0604 15:38:59.297293    7268 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:39:00.333970    7268 cli_runner.go:211] docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:39:00.334047    7268 cli_runner.go:217] Completed: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0360035s)
	I0604 15:39:00.334169    7268 client.go:171] LocalClient.Create took 6.3215376s
	I0604 15:39:02.353661    7268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:39:02.362334    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:03.395986    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:03.395986    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0336407s)
	I0604 15:39:03.395986    7268 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:03.686523    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:04.755430    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:04.755430    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0688956s)
	W0604 15:39:04.755430    7268 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	
	W0604 15:39:04.755430    7268 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:04.766376    7268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:39:04.773366    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:05.813180    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:05.813180    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0396441s)
	I0604 15:39:05.813314    7268 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:06.122337    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:07.161728    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:07.161890    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.039163s)
	W0604 15:39:07.161890    7268 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	
	W0604 15:39:07.161890    7268 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:07.161890    7268 start.go:134] duration metric: createHost completed in 13.4755022s
	I0604 15:39:07.161890    7268 start.go:81] releasing machines lock for "ingress-addon-legacy-20220604153841-5712", held for 13.4756915s
	W0604 15:39:07.161890    7268 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220604153841-5712 container: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220604153841-5712: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712: read-only file system
	I0604 15:39:07.177303    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:08.224068    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:08.224149    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0466747s)
	I0604 15:39:08.224200    7268 delete.go:82] Unable to get host status for ingress-addon-legacy-20220604153841-5712, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	W0604 15:39:08.224626    7268 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220604153841-5712 container: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220604153841-5712: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220604153841-5712 container: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220604153841-5712: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712: read-only file system
	
	I0604 15:39:08.224667    7268 start.go:614] Will try again in 5 seconds ...
	I0604 15:39:13.236760    7268 start.go:352] acquiring machines lock for ingress-addon-legacy-20220604153841-5712: {Name:mk11819ccef3b53238da5667084b68f43fe5b0ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:39:13.236760    7268 start.go:356] acquired machines lock for "ingress-addon-legacy-20220604153841-5712" in 0s
	I0604 15:39:13.236760    7268 start.go:94] Skipping create...Using existing machine configuration
	I0604 15:39:13.236760    7268 fix.go:55] fixHost starting: 
	I0604 15:39:13.254021    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:14.269909    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:14.269957    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0156863s)
	I0604 15:39:14.270069    7268 fix.go:103] recreateIfNeeded on ingress-addon-legacy-20220604153841-5712: state= err=unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:14.270069    7268 fix.go:108] machineExists: false. err=machine does not exist
	I0604 15:39:14.298748    7268 out.go:177] * docker "ingress-addon-legacy-20220604153841-5712" container is missing, will recreate.
	I0604 15:39:14.301378    7268 delete.go:124] DEMOLISHING ingress-addon-legacy-20220604153841-5712 ...
	I0604 15:39:14.316060    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:15.362351    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:15.362351    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0462798s)
	W0604 15:39:15.362351    7268 stop.go:75] unable to get state: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:15.362351    7268 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:15.377891    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:16.379470    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:16.379737    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0015686s)
	I0604 15:39:16.379835    7268 delete.go:82] Unable to get host status for ingress-addon-legacy-20220604153841-5712, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:16.387954    7268 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220604153841-5712
	W0604 15:39:17.407841    7268 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:17.407841    7268 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} ingress-addon-legacy-20220604153841-5712: (1.0198769s)
	I0604 15:39:17.407841    7268 kic.go:356] could not find the container ingress-addon-legacy-20220604153841-5712 to remove it. will try anyways
	I0604 15:39:17.415628    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:18.470266    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:18.470266    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0546269s)
	W0604 15:39:18.470266    7268 oci.go:84] error getting container status, will try to delete anyways: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:18.478755    7268 cli_runner.go:164] Run: docker exec --privileged -t ingress-addon-legacy-20220604153841-5712 /bin/bash -c "sudo init 0"
	W0604 15:39:19.480622    7268 cli_runner.go:211] docker exec --privileged -t ingress-addon-legacy-20220604153841-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 15:39:19.480938    7268 cli_runner.go:217] Completed: docker exec --privileged -t ingress-addon-legacy-20220604153841-5712 /bin/bash -c "sudo init 0": (1.0017286s)
	I0604 15:39:19.481009    7268 oci.go:625] error shutdown ingress-addon-legacy-20220604153841-5712: docker exec --privileged -t ingress-addon-legacy-20220604153841-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:20.498052    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:21.506014    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:21.506014    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0079516s)
	I0604 15:39:21.506014    7268 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:21.506014    7268 oci.go:639] temporary error: container ingress-addon-legacy-20220604153841-5712 status is  but expect it to be exited
	I0604 15:39:21.506014    7268 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:21.991999    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:23.035116    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:23.035147    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0428498s)
	I0604 15:39:23.035258    7268 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:23.035258    7268 oci.go:639] temporary error: container ingress-addon-legacy-20220604153841-5712 status is  but expect it to be exited
	I0604 15:39:23.035258    7268 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:23.936244    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:24.945894    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:24.946096    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0096398s)
	I0604 15:39:24.946159    7268 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:24.946159    7268 oci.go:639] temporary error: container ingress-addon-legacy-20220604153841-5712 status is  but expect it to be exited
	I0604 15:39:24.946159    7268 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:25.594649    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:26.603668    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:26.603668    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0090083s)
	I0604 15:39:26.603668    7268 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:26.603668    7268 oci.go:639] temporary error: container ingress-addon-legacy-20220604153841-5712 status is  but expect it to be exited
	I0604 15:39:26.603668    7268 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:27.720959    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:28.739164    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:28.739164    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0180641s)
	I0604 15:39:28.739248    7268 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:28.739248    7268 oci.go:639] temporary error: container ingress-addon-legacy-20220604153841-5712 status is  but expect it to be exited
	I0604 15:39:28.739349    7268 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:30.271603    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:31.265672    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:31.265672    7268 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:31.265672    7268 oci.go:639] temporary error: container ingress-addon-legacy-20220604153841-5712 status is  but expect it to be exited
	I0604 15:39:31.265672    7268 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:34.317378    7268 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:39:35.356527    7268 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:39:35.356621    7268 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (1.0389327s)
	I0604 15:39:35.356807    7268 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:35.356877    7268 oci.go:639] temporary error: container ingress-addon-legacy-20220604153841-5712 status is  but expect it to be exited
	I0604 15:39:35.356971    7268 oci.go:88] couldn't shut down ingress-addon-legacy-20220604153841-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	 
	I0604 15:39:35.365857    7268 cli_runner.go:164] Run: docker rm -f -v ingress-addon-legacy-20220604153841-5712
	I0604 15:39:36.378078    7268 cli_runner.go:217] Completed: docker rm -f -v ingress-addon-legacy-20220604153841-5712: (1.0122103s)
	I0604 15:39:36.386742    7268 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220604153841-5712
	W0604 15:39:37.393152    7268 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:37.393152    7268 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} ingress-addon-legacy-20220604153841-5712: (1.0063996s)
	I0604 15:39:37.401290    7268 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:39:38.436748    7268 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:39:38.436748    7268 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0351228s)
	I0604 15:39:38.446114    7268 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220604153841-5712] to gather additional debugging logs...
	I0604 15:39:38.446114    7268 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220604153841-5712
	W0604 15:39:39.436805    7268 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:39.437010    7268 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220604153841-5712]: docker network inspect ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:39.437010    7268 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220604153841-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220604153841-5712
	
	** /stderr **
	W0604 15:39:39.437950    7268 delete.go:139] delete failed (probably ok) <nil>
	I0604 15:39:39.437950    7268 fix.go:115] Sleeping 1 second for extra luck!
	I0604 15:39:40.453001    7268 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:39:40.458535    7268 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0604 15:39:40.459236    7268 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220604153841-5712" (driver="docker")
	I0604 15:39:40.459236    7268 client.go:168] LocalClient.Create starting
	I0604 15:39:40.459236    7268 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:39:40.459989    7268 main.go:134] libmachine: Decoding PEM data...
	I0604 15:39:40.459989    7268 main.go:134] libmachine: Parsing certificate...
	I0604 15:39:40.459989    7268 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:39:40.459989    7268 main.go:134] libmachine: Decoding PEM data...
	I0604 15:39:40.459989    7268 main.go:134] libmachine: Parsing certificate...
	I0604 15:39:40.469647    7268 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:39:41.441912    7268 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220604153841-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:39:41.450369    7268 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220604153841-5712] to gather additional debugging logs...
	I0604 15:39:41.450369    7268 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220604153841-5712
	W0604 15:39:42.436005    7268 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:42.436005    7268 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220604153841-5712]: docker network inspect ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:42.436096    7268 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220604153841-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220604153841-5712
	
	** /stderr **
	I0604 15:39:42.444095    7268 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:39:43.453905    7268 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00015c318] amended:false}} dirty:map[] misses:0}
	I0604 15:39:43.454720    7268 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:39:43.469909    7268 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00015c318] amended:true}} dirty:map[192.168.49.0:0xc00015c318 192.168.58.0:0xc000586870] misses:0}
	I0604 15:39:43.469909    7268 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:39:43.469909    7268 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220604153841-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 15:39:43.475917    7268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712
	W0604 15:39:44.472446    7268 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	E0604 15:39:44.472446    7268 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220604153841-5712 192.168.58.0/24: create docker network ingress-addon-legacy-20220604153841-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a175c54f865da14b077df8b4dce19c496a3953c6c01aeb97a951fc190d6edfd8 (br-a175c54f865d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 15:39:44.472446    7268 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220604153841-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a175c54f865da14b077df8b4dce19c496a3953c6c01aeb97a951fc190d6edfd8 (br-a175c54f865d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220604153841-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a175c54f865da14b077df8b4dce19c496a3953c6c01aeb97a951fc190d6edfd8 (br-a175c54f865d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 15:39:44.487054    7268 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:39:45.492900    7268 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0056223s)
	I0604 15:39:45.501165    7268 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:39:46.527376    7268 cli_runner.go:211] docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:39:46.527376    7268 cli_runner.go:217] Completed: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0262002s)
	I0604 15:39:46.527376    7268 client.go:171] LocalClient.Create took 6.0680772s
	I0604 15:39:48.545511    7268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:39:48.552582    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:49.540272    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:49.540272    7268 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:49.877869    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:50.906344    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:50.906344    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0282281s)
	W0604 15:39:50.906344    7268 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	
	W0604 15:39:50.906344    7268 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:50.917836    7268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:39:50.923356    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:51.955933    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:51.955933    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0315486s)
	I0604 15:39:51.955933    7268 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:52.188944    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:53.227796    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:53.227796    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0388417s)
	W0604 15:39:53.227796    7268 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	
	W0604 15:39:53.227796    7268 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:53.227796    7268 start.go:134] duration metric: createHost completed in 12.7744074s
	I0604 15:39:53.241315    7268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:39:53.248542    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:54.302520    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:54.302653    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0537659s)
	I0604 15:39:54.302653    7268 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:54.561206    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:55.597560    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:55.597808    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0363429s)
	W0604 15:39:55.597906    7268 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	
	W0604 15:39:55.597906    7268 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:55.608568    7268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:39:55.613820    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:56.676709    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:56.676739    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0627264s)
	I0604 15:39:56.677081    7268 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:56.883947    7268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712
	W0604 15:39:57.956854    7268 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712 returned with exit code 1
	I0604 15:39:57.957124    7268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: (1.0728952s)
	W0604 15:39:57.957571    7268 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	
	W0604 15:39:57.957691    7268 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220604153841-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220604153841-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	I0604 15:39:57.957691    7268 fix.go:57] fixHost completed within 44.720466s
	I0604 15:39:57.957807    7268 start.go:81] releasing machines lock for "ingress-addon-legacy-20220604153841-5712", held for 44.7205819s
	W0604 15:39:57.958421    7268 out.go:239] * Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220604153841-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220604153841-5712 container: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220604153841-5712: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220604153841-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220604153841-5712 container: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220604153841-5712: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712: read-only file system
	
	I0604 15:39:57.967243    7268 out.go:177] 
	W0604 15:39:57.969605    7268 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220604153841-5712 container: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220604153841-5712: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220604153841-5712 container: docker volume create ingress-addon-legacy-20220604153841-5712 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220604153841-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220604153841-5712: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220604153841-5712: read-only file system
	
	W0604 15:39:57.969605    7268 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 15:39:57.969605    7268 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 15:39:57.973634    7268 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220604153841-5712 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker" : exit status 60
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (76.18s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (7.03s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220604153841-5712 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220604153841-5712 addons enable ingress --alsologtostderr -v=5: exit status 10 (3.1000132s)

-- stdout --
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0604 15:39:58.405468    6980 out.go:296] Setting OutFile to fd 928 ...
	I0604 15:39:58.475083    6980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:39:58.475083    6980 out.go:309] Setting ErrFile to fd 864...
	I0604 15:39:58.475083    6980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:39:58.488087    6980 config.go:178] Loaded profile config "ingress-addon-legacy-20220604153841-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0604 15:39:58.488154    6980 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220604153841-5712"
	I0604 15:39:58.488255    6980 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220604153841-5712"
	I0604 15:39:58.489214    6980 host.go:66] Checking if "ingress-addon-legacy-20220604153841-5712" exists ...
	I0604 15:39:58.503140    6980 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}
	W0604 15:40:00.923221    6980 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:40:00.923302    6980 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: (2.4198558s)
	W0604 15:40:00.923411    6980 host.go:54] host status for "ingress-addon-legacy-20220604153841-5712" returned error: state: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712
	W0604 15:40:00.923480    6980 addons.go:202] "ingress-addon-legacy-20220604153841-5712" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0604 15:40:00.923538    6980 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20220604153841-5712"
	I0604 15:40:00.931275    6980 out.go:177] * Verifying ingress addon...
	W0604 15:40:00.934414    6980 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:40:00.936904    6980 out.go:177] 
	W0604 15:40:00.938980    6980 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220604153841-5712" does not exist: client config: context "ingress-addon-legacy-20220604153841-5712" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220604153841-5712" does not exist: client config: context "ingress-addon-legacy-20220604153841-5712" does not exist]
	W0604 15:40:00.938980    6980 out.go:239] * 
	* 
	W0604 15:40:01.201044    6980 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 15:40:01.204209    6980 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220604153841-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220604153841-5712: exit status 1 (1.0685457s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220604153841-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220604153841-5712 -n ingress-addon-legacy-20220604153841-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220604153841-5712 -n ingress-addon-legacy-20220604153841-5712: exit status 7 (2.846391s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:40:05.143473    8508 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220604153841-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (7.03s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (3.89s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:156: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220604153841-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220604153841-5712: exit status 1 (1.1145626s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220604153841-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220604153841-5712 -n ingress-addon-legacy-20220604153841-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220604153841-5712 -n ingress-addon-legacy-20220604153841-5712: exit status 7 (2.7647842s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:40:11.872626    9100 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220604153841-5712": docker container inspect ingress-addon-legacy-20220604153841-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220604153841-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220604153841-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (3.89s)

TestJSONOutput/start/Command (73.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220604154019-5712 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-20220604154019-5712 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: exit status 60 (1m13.7952799s)

-- stdout --
	{"specversion":"1.0","id":"f54df687-89a6-4888-9b35-06849262b6dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-20220604154019-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"85009fb3-aaf2-4e0c-941c-901211cba049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f4eb9cad-8757-4dde-b4ae-30d89fe7fe7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"e0f7da0a-6a13-467f-8e20-3baedf6b7549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14123"}}
	{"specversion":"1.0","id":"f8f71e1a-26d2-4ed0-88d5-a9ada027e829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ef75a0e0-da65-4730-b5f1-c5e37482ecff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c45dfe02-175f-40ac-b8d4-2504a3986394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"21ea1044-5986-47c8-abaa-1d04ccc1b520","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-20220604154019-5712 in cluster json-output-20220604154019-5712","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"68ded9c2-aaf5-47a4-950c-49e6a2217e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"10d94d83-bd79-4d9a-b18e-3b7aeaab4762","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"82e15d98-a4a6-45f8-81a2-f64578257109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220604154019-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network b2e395e96c1f4ad10f0bb8df4b2e0d6ec5093ab6eca08a1d93473ea6374bd8c8 (br-b2e395e96c1f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"a5c49827-592e-4db5-a9ae-8bd1a5c0153a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system"}}
	{"specversion":"1.0","id":"643c5ba7-8752-45a6-9e8a-9f5a34ba916a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"docker \"json-output-20220604154019-5712\" container is missing, will recreate.","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"340e3778-82b4-4fa6-9aea-8676ada188c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cae65128-7f3f-49c1-82bf-bd6f602c1242","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220604154019-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 6fdbae0def441c50b591562ca2d78486d92d679b2e9f21c237a7c86e2995bbe0 (br-6fdbae0def44): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"ac897431-1fc0-4854-832e-c9e619ed6ed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start docker container. Running \"minikube delete -p json-output-20220604154019-5712\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system"}}
	{"specversion":"1.0","id":"7962ac35-7f6c-4548-bf6b-4112544dcdc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Restart Docker","exitcode":"60","issues":"https://github.com/kubernetes/minikube/issues/6825","message":"Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system","name":"PR_DOCKER_READONLY_VOL","url":""}}

-- /stdout --
** stderr ** 
	E0604 15:40:34.129905    8488 network_create.go:104] error while trying to create docker network json-output-20220604154019-5712 192.168.49.0/24: create docker network json-output-20220604154019-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b2e395e96c1f4ad10f0bb8df4b2e0d6ec5093ab6eca08a1d93473ea6374bd8c8 (br-b2e395e96c1f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	E0604 15:41:20.247429    8488 network_create.go:104] error while trying to create docker network json-output-20220604154019-5712 192.168.58.0/24: create docker network json-output-20220604154019-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6fdbae0def441c50b591562ca2d78486d92d679b2e9f21c237a7c86e2995bbe0 (br-6fdbae0def44): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe start -p json-output-20220604154019-5712 --output=json --user=testUser --memory=2200 --wait=true --driver=docker": exit status 60
--- FAIL: TestJSONOutput/start/Command (73.80s)
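Two distinct root causes are visible in the events above: both `docker network create` attempts fail with "networks have overlapping IPv4", and both `docker volume create` attempts fail because `/var/lib/docker` is a read-only file system. A minimal sketch of the subnet-overlap condition, using Python's stdlib `ipaddress` module; the `192.168.48.0/20` "stale bridge" subnet is a hypothetical stand-in, since the report does not show the actual ranges of `br-c61886399614` or `br-1140b1ac4d94`.

```python
# Sketch of the subnet-conflict check the Docker daemon reports as
# "networks have overlapping IPv4". Requested subnets are taken from the
# log above; the existing bridge subnet is a hypothetical stand-in.
import ipaddress

requested = [
    ipaddress.ip_network("192.168.49.0/24"),  # first attempt (see events above)
    ipaddress.ip_network("192.168.58.0/24"),  # retry after "will recreate"
]
existing = [ipaddress.ip_network("192.168.48.0/20")]  # hypothetical stale bridge

for want in requested:
    for have in existing:
        if want.overlaps(have):
            print(f"{want} conflicts with existing bridge {have}")
```

One way to clear this state on the CI host is to list the live bridges with `docker network ls`, check their ranges with `docker network inspect <name> --format '{{json .IPAM.Config}}'`, and remove unused ones with `docker network prune`.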

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps


=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 8 has already been assigned to another step:
Creating docker container (CPUs=2, Memory=2200MB) ...
Cannot use for:
docker "json-output-20220604154019-5712" container is missing, will recreate.
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f54df687-89a6-4888-9b35-06849262b6dc
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220604154019-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 85009fb3-aaf2-4e0c-941c-901211cba049
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f4eb9cad-8757-4dde-b4ae-30d89fe7fe7f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e0f7da0a-6a13-467f-8e20-3baedf6b7549
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=14123"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f8f71e1a-26d2-4ed0-88d5-a9ada027e829
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ef75a0e0-da65-4730-b5f1-c5e37482ecff
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c45dfe02-175f-40ac-b8d4-2504a3986394
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with the root privilege"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 21ea1044-5986-47c8-abaa-1d04ccc1b520
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220604154019-5712 in cluster json-output-20220604154019-5712",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 68ded9c2-aaf5-47a4-950c-49e6a2217e56
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 10d94d83-bd79-4d9a-b18e-3b7aeaab4762
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 82e15d98-a4a6-45f8-81a2-f64578257109
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220604154019-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network b2e395e96c1f4ad10f0bb8df4b2e0d6ec5093ab6eca08a1d93473ea6374bd8c8 (br-b2e395e96c1f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a5c49827-592e-4db5-a9ae-8bd1a5c0153a
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 643c5ba7-8752-45a6-9e8a-9f5a34ba916a
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220604154019-5712\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 340e3778-82b4-4fa6-9aea-8676ada188c8
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: cae65128-7f3f-49c1-82bf-bd6f602c1242
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220604154019-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 6fdbae0def441c50b591562ca2d78486d92d679b2e9f21c237a7c86e2995bbe0 (br-6fdbae0def44): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: ac897431-1fc0-4854-832e-c9e619ed6ed4
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220604154019-5712\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7962ac35-7f6c-4548-bf6b-4112544dcdc0
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
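The failure above is mechanical: `currentstep` 8 is emitted three times ("Creating docker container ...", the "container is missing, will recreate." notice, then "Creating docker container ..." again), and the test requires each step number to carry a single message. A rough approximation of that uniqueness check, assuming a simplified event shape; the real logic lives in json_output_test.go and may differ in detail.

```python
# Rough approximation of the DistinctCurrentSteps check: every step event
# in the --output=json stream must map its "currentstep" value to exactly
# one message. Event payloads below are abbreviated from the failing run.
import json

events = [
    '{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"5","message":"Pulling base image ..."}}',
    '{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ..."}}',
    '{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","message":"container is missing, will recreate."}}',
]

def distinct_current_steps(lines):
    """Return the first reused currentstep value, or None if all are distinct."""
    seen = {}  # currentstep -> first message observed for it
    for line in lines:
        ev = json.loads(line)
        if ev["type"] != "io.k8s.sigs.minikube.step":
            continue
        step, msg = ev["data"]["currentstep"], ev["data"]["message"]
        if step in seen and seen[step] != msg:
            return step
        seen[step] = msg
    return None

print(distinct_current_steps(events))  # step "8" is reused, as in the failure above
```

The sibling IncreasingCurrentSteps failure follows from the same three events: the sequence 5, 8, 8 repeats a step number, so the stream is not strictly increasing either.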

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps


=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f54df687-89a6-4888-9b35-06849262b6dc
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220604154019-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 85009fb3-aaf2-4e0c-941c-901211cba049
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f4eb9cad-8757-4dde-b4ae-30d89fe7fe7f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e0f7da0a-6a13-467f-8e20-3baedf6b7549
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=14123"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f8f71e1a-26d2-4ed0-88d5-a9ada027e829
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ef75a0e0-da65-4730-b5f1-c5e37482ecff
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c45dfe02-175f-40ac-b8d4-2504a3986394
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with the root privilege"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 21ea1044-5986-47c8-abaa-1d04ccc1b520
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220604154019-5712 in cluster json-output-20220604154019-5712",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 68ded9c2-aaf5-47a4-950c-49e6a2217e56
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 10d94d83-bd79-4d9a-b18e-3b7aeaab4762
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 82e15d98-a4a6-45f8-81a2-f64578257109
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220604154019-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network b2e395e96c1f4ad10f0bb8df4b2e0d6ec5093ab6eca08a1d93473ea6374bd8c8 (br-b2e395e96c1f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a5c49827-592e-4db5-a9ae-8bd1a5c0153a
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 643c5ba7-8752-45a6-9e8a-9f5a34ba916a
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220604154019-5712\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 340e3778-82b4-4fa6-9aea-8676ada188c8
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: cae65128-7f3f-49c1-82bf-bd6f602c1242
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220604154019-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220604154019-5712: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 6fdbae0def441c50b591562ca2d78486d92d679b2e9f21c237a7c86e2995bbe0 (br-6fdbae0def44): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: ac897431-1fc0-4854-832e-c9e619ed6ed4
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220604154019-5712\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7962ac35-7f6c-4548-bf6b-4112544dcdc0
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220604154019-5712 container: docker volume create json-output-20220604154019-5712 --label name.minikube.sigs.k8s.io=json-output-20220604154019-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220604154019-5712: error while creating volume root path '/var/lib/docker/volumes/json-output-20220604154019-5712': mkdir /var/lib/docker/volumes/json-output-20220604154019-5712: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)
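The event dumps above are minikube's `--output=json` stream: one CloudEvents JSON object per line, with the step or error payload nested under `data`. As a rough illustration (the field values below are made up for the example, not taken from a real run), such a line can be parsed like this:

```python
import json

# An illustrative event line in the shape minikube emits with --output=json;
# the values are invented for this sketch.
line = ('{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json",'
        '"data":{"currentstep":"3","message":"Starting control plane node ...",'
        '"name":"Starting Node","totalsteps":"19"}}')

event = json.loads(line)
print(event["type"])                 # io.k8s.sigs.minikube.step
print(event["data"]["currentstep"])  # 3
```

Note that `currentstep` and `totalsteps` arrive as strings, which is why the test harness compares them after conversion.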
TestJSONOutput/pause/Command (3.06s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220604154019-5712 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p json-output-20220604154019-5712 --output=json --user=testUser: exit status 80 (3.0638078s)
-- stdout --
	{"specversion":"1.0","id":"b1e9f0f6-13c5-4a3a-ac9a-6102475257df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"state: unknown state \"json-output-20220604154019-5712\": docker container inspect json-output-20220604154019-5712 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220604154019-5712","name":"GUEST_STATUS","url":""}}
	{"specversion":"1.0","id":"23ed3e11-5534-4a89-a9c7-25dc38e937be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                      │\n│    If the above advice does not help, please let us know:                                                            │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                          │\n│                                                                                                                      │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │\n│    Please also attach the following file to the GitHub issue:                                                        │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │\n│                                                                                                                      │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe pause -p json-output-20220604154019-5712 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (3.06s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/unpause/Command (3.07s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220604154019-5712 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p json-output-20220604154019-5712 --output=json --user=testUser: exit status 80 (3.0698204s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "json-output-20220604154019-5712": docker container inspect json-output-20220604154019-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220604154019-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_unpause_00b12d9cedab4ae1bb930a621bdee2ada68dbd98_9.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe unpause -p json-output-20220604154019-5712 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (3.07s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/stop/Command (21.95s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220604154019-5712 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p json-output-20220604154019-5712 --output=json --user=testUser: exit status 82 (21.9451839s)
-- stdout --
	{"specversion":"1.0","id":"7364df3a-e16c-4ac8-989a-967f9b412162","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220604154019-5712\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"76578f42-13e9-4f80-9b16-197f3f3260bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220604154019-5712\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"8d07dd41-1b5d-4c0f-886a-de771881a6cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220604154019-5712\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"84518b1a-4f55-42c4-8bd6-d181d6becfc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220604154019-5712\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"9232b5e0-46a6-4d4d-b6c5-7ba6b20d1439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220604154019-5712\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"765ede80-a78f-4e70-aa48-993430562d34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220604154019-5712\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"de25276f-136c-4c85-92c0-730931ce0ec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"docker container inspect json-output-20220604154019-5712 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220604154019-5712","name":"GUEST_STOP_TIMEOUT","url":""}}
	{"specversion":"1.0","id":"006e3560-395e-4874-bbec-7d4d4acf45e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
** stderr ** 
	E0604 15:41:44.999978    7152 daemonize_windows.go:38] error terminating scheduled stop for profile json-output-20220604154019-5712: stopping schedule-stop service for profile json-output-20220604154019-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "json-output-20220604154019-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" json-output-20220604154019-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220604154019-5712
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe stop -p json-output-20220604154019-5712 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (21.95s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-20220604154019-5712"  ...
Cannot use for:
Stopping node "json-output-20220604154019-5712"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7364df3a-e16c-4ac8-989a-967f9b412162
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 76578f42-13e9-4f80-9b16-197f3f3260bc
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8d07dd41-1b5d-4c0f-886a-de771881a6cd
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 84518b1a-4f55-42c4-8bd6-d181d6becfc8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9232b5e0-46a6-4d4d-b6c5-7ba6b20d1439
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 765ede80-a78f-4e70-aa48-993430562d34
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: de25276f-136c-4c85-92c0-730931ce0ec9
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220604154019-5712 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220604154019-5712",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 006e3560-395e-4874-bbec-7d4d4acf45e4
datacontenttype: application/json
Data,
{
"message": "╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7364df3a-e16c-4ac8-989a-967f9b412162
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 76578f42-13e9-4f80-9b16-197f3f3260bc
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8d07dd41-1b5d-4c0f-886a-de771881a6cd
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 84518b1a-4f55-42c4-8bd6-d181d6becfc8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9232b5e0-46a6-4d4d-b6c5-7ba6b20d1439
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 765ede80-a78f-4e70-aa48-993430562d34
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220604154019-5712\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: de25276f-136c-4c85-92c0-730931ce0ec9
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220604154019-5712 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220604154019-5712",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 006e3560-395e-4874-bbec-7d4d4acf45e4
datacontenttype: application/json
Data,
{
"message": "╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
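The `IncreasingCurrentSteps` failure above is triggered because every `Stopping` event in the dump reports `"currentstep": "0"`. The invariant the test enforces can be sketched as follows (an illustration of the check, not the actual test code; the sample events are reduced to the fields the check reads):

```python
import json

def steps_strictly_increase(event_lines):
    """Return True if currentstep values in minikube step events only go up."""
    last = -1
    for raw in event_lines:
        event = json.loads(raw)
        if event.get("type") != "io.k8s.sigs.minikube.step":
            continue  # warning/error events carry no step counter
        step = int(event["data"]["currentstep"])
        if step <= last:
            return False  # repeated or decreasing step, as in the log above
        last = step
    return True

# Six identical "currentstep": "0" events, as emitted by the stop command, fail.
repeated = ['{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0"}}'] * 6
print(steps_strictly_increase(repeated))  # False
```

The related `DistinctCurrentSteps` failure is the same symptom seen from the other side: step index 0 is assigned to more than one event.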
TestKicCustomNetwork/create_custom_network (246.18s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220604154217-5712 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220604154217-5712 --network=: (3m25.3977518s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0223689s)
kic_custom_network_test.go:127: docker-network-20220604154217-5712 network is not listed by [[docker network ls --format {{.Name}}]]: 
-- stdout --
	bridge
	host
	none
-- /stdout --
helpers_test.go:175: Cleaning up "docker-network-20220604154217-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220604154217-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220604154217-5712: (39.7382634s)
--- FAIL: TestKicCustomNetwork/create_custom_network (246.18s)
TestKicExistingNetwork (4.12s)
=== RUN   TestKicExistingNetwork
E0604 15:50:16.607658    5712 network_create.go:104] error while trying to create docker network existing-network 192.168.49.0/24: create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true existing-network: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 46c541689e40e81202d0cdacd7931b2c6a76ecf4465b78faa2512aa4fa0de0ae (br-46c541689e40): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
kic_custom_network_test.go:78: error creating network: un-retryable: create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true existing-network: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 46c541689e40e81202d0cdacd7931b2c6a76ecf4465b78faa2512aa4fa0de0ae (br-46c541689e40): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
--- FAIL: TestKicExistingNetwork (4.12s)
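The recurring `networks have overlapping IPv4` errors come from the Docker daemon rejecting a bridge whose subnet intersects an existing one: the requested 192.168.49.0/24 collides with a leftover bridge (`br-c61886399614`) already holding that range. The overlap check itself can be sketched with Python's standard `ipaddress` module:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """True if the two CIDR ranges share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The requested subnet collides with an existing bridge on the same range ...
print(subnets_overlap("192.168.49.0/24", "192.168.49.0/24"))  # True
# ... which is why minikube's retry moves on to 192.168.58.0/24, a disjoint range.
print(subnets_overlap("192.168.49.0/24", "192.168.58.0/24"))  # False
```

On a CI host in this state, removing the stale bridges (for example with `docker network prune` after the conflicting profiles are deleted) would likely clear the conflict, though that cleanup is outside what this report shows.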
TestKicCustomSubnet (235.58s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220604155016-5712 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220604155016-5712 --subnet=192.168.60.0/24: (3m15.4892605s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220604155016-5712 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Non-zero exit: docker network inspect custom-subnet-20220604155016-5712 --format "{{(index .IPAM.Config 0).Subnet}}": exit status 1 (1.0569294s)
-- stdout --
	

-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220604155016-5712
** /stderr **
kic_custom_network_test.go:135: docker network inspect custom-subnet-20220604155016-5712 --format "{{(index .IPAM.Config 0).Subnet}}" failed: exit status 1
-- stdout --
	
-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220604155016-5712
** /stderr **
helpers_test.go:175: Cleaning up "custom-subnet-20220604155016-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220604155016-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220604155016-5712: (39.0173725s)
--- FAIL: TestKicCustomSubnet (235.58s)
TestMinikubeProfile (94.53s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-20220604155412-5712 --driver=docker
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p first-20220604155412-5712 --driver=docker: exit status 60 (1m14.3656976s)

-- stdout --
	* [first-20220604155412-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node first-20220604155412-5712 in cluster first-20220604155412-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "first-20220604155412-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:54:26.919110    8756 network_create.go:104] error while trying to create docker network first-20220604155412-5712 192.168.49.0/24: create docker network first-20220604155412-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220604155412-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f9fedf5ce62ee2c7eec0b1786029988d9aa278000b2f709587985d2f9e377dc6 (br-f9fedf5ce62e): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network first-20220604155412-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220604155412-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f9fedf5ce62ee2c7eec0b1786029988d9aa278000b2f709587985d2f9e377dc6 (br-f9fedf5ce62e): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for first-20220604155412-5712 container: docker volume create first-20220604155412-5712 --label name.minikube.sigs.k8s.io=first-20220604155412-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220604155412-5712: error while creating volume root path '/var/lib/docker/volumes/first-20220604155412-5712': mkdir /var/lib/docker/volumes/first-20220604155412-5712: read-only file system
	
	E0604 15:55:13.342676    8756 network_create.go:104] error while trying to create docker network first-20220604155412-5712 192.168.58.0/24: create docker network first-20220604155412-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220604155412-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60a24f8060d53f928ff833394084524ae975f6e33c583a397a54d0bdfc78d046 (br-60a24f8060d5): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network first-20220604155412-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220604155412-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60a24f8060d53f928ff833394084524ae975f6e33c583a397a54d0bdfc78d046 (br-60a24f8060d5): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p first-20220604155412-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for first-20220604155412-5712 container: docker volume create first-20220604155412-5712 --label name.minikube.sigs.k8s.io=first-20220604155412-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220604155412-5712: error while creating volume root path '/var/lib/docker/volumes/first-20220604155412-5712': mkdir /var/lib/docker/volumes/first-20220604155412-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for first-20220604155412-5712 container: docker volume create first-20220604155412-5712 --label name.minikube.sigs.k8s.io=first-20220604155412-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220604155412-5712: error while creating volume root path '/var/lib/docker/volumes/first-20220604155412-5712': mkdir /var/lib/docker/volumes/first-20220604155412-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-windows-amd64.exe start -p first-20220604155412-5712 --driver=docker": exit status 60
panic.go:482: *** TestMinikubeProfile FAILED at 2022-06-04 15:55:26.8878721 +0000 GMT m=+2132.093117901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect second-20220604155412-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect second-20220604155412-5712: exit status 1 (1.0776914s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: second-20220604155412-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p second-20220604155412-5712 -n second-20220604155412-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p second-20220604155412-5712 -n second-20220604155412-5712: exit status 85 (344.1857ms)

-- stdout --
	* Profile "second-20220604155412-5712" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-20220604155412-5712"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-20220604155412-5712" host is not running, skipping log retrieval (state="* Profile \"second-20220604155412-5712\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-20220604155412-5712\"")
helpers_test.go:175: Cleaning up "second-20220604155412-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-20220604155412-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-20220604155412-5712: (6.9201481s)
panic.go:482: *** TestMinikubeProfile FAILED at 2022-06-04 15:55:35.2425121 +0000 GMT m=+2140.447668501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect first-20220604155412-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect first-20220604155412-5712: exit status 1 (1.073411s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: first-20220604155412-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p first-20220604155412-5712 -n first-20220604155412-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p first-20220604155412-5712 -n first-20220604155412-5712: exit status 7 (2.7597089s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:55:39.053288    1076 status.go:247] status error: host: state: unknown state "first-20220604155412-5712": docker container inspect first-20220604155412-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: first-20220604155412-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-20220604155412-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "first-20220604155412-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-20220604155412-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-20220604155412-5712: (7.9667468s)
--- FAIL: TestMinikubeProfile (94.53s)
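Annotation: the root cause in the stderr above is docker rejecting both of minikube's candidate subnets (192.168.49.0/24, then 192.168.58.0/24) with "networks have overlapping IPv4", before the fallback volume creation also fails on a read-only filesystem. A minimal sketch of docker's overlap criterion using the Python stdlib `ipaddress` module; the /20 range below is an illustrative assumption, since the log never shows the subnets of the pre-existing bridges br-c61886399614 and br-1140b1ac4d94:

```python
import ipaddress

def overlaps(subnet_a: str, subnet_b: str) -> bool:
    """True if two CIDR ranges share any addresses, which is what the
    docker daemon rejects when creating a new bridge network."""
    return ipaddress.ip_network(subnet_a).overlaps(ipaddress.ip_network(subnet_b))

# The two candidate subnets minikube tried in the log above do not clash
# with each other, so something else on the host must cover both:
print(overlaps("192.168.49.0/24", "192.168.58.0/24"))  # False

# A wider pre-existing bridge would swallow both candidates. The /20 here
# is a hypothetical stand-in for the unshown br-c61886399614 subnet.
print(overlaps("192.168.48.0/20", "192.168.49.0/24"))  # True
print(overlaps("192.168.48.0/20", "192.168.58.0/24"))  # True
```

`192.168.48.0/20` spans 192.168.48.0 through 192.168.63.255, so it contains both /24s minikube attempted, consistent with two distinct pre-existing bridges each conflicting with one candidate.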

TestMountStart/serial/StartWithMountFirst (78s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220604155547-5712 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-1-20220604155547-5712 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: exit status 60 (1m14.034826s)

-- stdout --
	* [mount-start-1-20220604155547-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting minikube without Kubernetes mount-start-1-20220604155547-5712 in cluster mount-start-1-20220604155547-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-1-20220604155547-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:56:01.613226    2256 network_create.go:104] error while trying to create docker network mount-start-1-20220604155547-5712 192.168.49.0/24: create docker network mount-start-1-20220604155547-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220604155547-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fd463e946d7392d3f4c501d43edbded38fa2552c2eb576478ac5c44ec5e9796a (br-fd463e946d73): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220604155547-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220604155547-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fd463e946d7392d3f4c501d43edbded38fa2552c2eb576478ac5c44ec5e9796a (br-fd463e946d73): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220604155547-5712 container: docker volume create mount-start-1-20220604155547-5712 --label name.minikube.sigs.k8s.io=mount-start-1-20220604155547-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220604155547-5712: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220604155547-5712': mkdir /var/lib/docker/volumes/mount-start-1-20220604155547-5712: read-only file system
	
	E0604 15:56:47.839541    2256 network_create.go:104] error while trying to create docker network mount-start-1-20220604155547-5712 192.168.58.0/24: create docker network mount-start-1-20220604155547-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220604155547-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b76a13ced1a9cc5463f64019e216fa5a9ca3a9a359395090ba288ec929e06b73 (br-b76a13ced1a9): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220604155547-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220604155547-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b76a13ced1a9cc5463f64019e216fa5a9ca3a9a359395090ba288ec929e06b73 (br-b76a13ced1a9): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p mount-start-1-20220604155547-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220604155547-5712 container: docker volume create mount-start-1-20220604155547-5712 --label name.minikube.sigs.k8s.io=mount-start-1-20220604155547-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220604155547-5712: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220604155547-5712': mkdir /var/lib/docker/volumes/mount-start-1-20220604155547-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220604155547-5712 container: docker volume create mount-start-1-20220604155547-5712 --label name.minikube.sigs.k8s.io=mount-start-1-20220604155547-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220604155547-5712: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220604155547-5712': mkdir /var/lib/docker/volumes/mount-start-1-20220604155547-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-1-20220604155547-5712 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20220604155547-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect mount-start-1-20220604155547-5712: exit status 1 (1.0748172s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: mount-start-1-20220604155547-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220604155547-5712 -n mount-start-1-20220604155547-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220604155547-5712 -n mount-start-1-20220604155547-5712: exit status 7 (2.8787159s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:57:05.017616    6420 status.go:247] status error: host: state: unknown state "mount-start-1-20220604155547-5712": docker container inspect mount-start-1-20220604155547-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20220604155547-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20220604155547-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountFirst (78.00s)
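Annotation: the terminal error in this section, PR_DOCKER_READONLY_VOL, comes from `docker volume create` failing with "mkdir ...: read-only file system", i.e. EROFS at the syscall level inside the Docker Desktop WSL2 VM. An illustrative sketch (not minikube code) of classifying that failure mode; the simulated error and path are stand-ins for the daemon-side mkdir:

```python
import errno

def is_readonly_fs_error(err: OSError) -> bool:
    """True when an OSError means the target filesystem is mounted read-only,
    matching the 'mkdir ...: read-only file system' failures above."""
    return err.errno == errno.EROFS

# Simulated daemon-side error; the path is illustrative, not from this run.
simulated = OSError(errno.EROFS, "read-only file system",
                    "/var/lib/docker/volumes/example-profile")
print(is_readonly_fs_error(simulated))  # True
print(is_readonly_fs_error(OSError(errno.EACCES, "permission denied")))  # False
```

A read-only `/var/lib/docker` inside the VM is a daemon-state problem rather than a test bug, which is why the log's suggestion is simply to restart Docker.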

TestMultiNode/serial/FreshStart2Nodes (78.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: exit status 60 (1m14.1440875s)

-- stdout --
	* [multinode-20220604155719-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220604155719-5712 in cluster multinode-20220604155719-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220604155719-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 15:57:20.091145    3120 out.go:296] Setting OutFile to fd 928 ...
	I0604 15:57:20.144532    3120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:57:20.144532    3120 out.go:309] Setting ErrFile to fd 624...
	I0604 15:57:20.144590    3120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:57:20.154829    3120 out.go:303] Setting JSON to false
	I0604 15:57:20.157057    3120 start.go:115] hostinfo: {"hostname":"minikube2","uptime":9312,"bootTime":1654348928,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:57:20.158062    3120 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:57:20.164870    3120 out.go:177] * [multinode-20220604155719-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:57:20.168721    3120 notify.go:193] Checking for updates...
	I0604 15:57:20.174016    3120 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:57:20.176693    3120 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:57:20.178745    3120 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 15:57:20.180778    3120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 15:57:20.183544    3120 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:57:22.696683    3120 docker.go:137] docker version: linux-20.10.16
	I0604 15:57:22.705068    3120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:57:24.632258    3120 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9268969s)
	I0604 15:57:24.633167    3120 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 15:57:23.6732395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:57:24.637297    3120 out.go:177] * Using the docker driver based on user configuration
	I0604 15:57:24.640355    3120 start.go:284] selected driver: docker
	I0604 15:57:24.640355    3120 start.go:806] validating driver "docker" against <nil>
	I0604 15:57:24.640355    3120 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 15:57:24.758349    3120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:57:26.704650    3120 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9462806s)
	I0604 15:57:26.704650    3120 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 15:57:25.7489091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:57:26.704650    3120 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 15:57:26.705674    3120 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 15:57:26.708645    3120 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 15:57:26.711041    3120 cni.go:95] Creating CNI manager for ""
	I0604 15:57:26.711041    3120 cni.go:156] 0 nodes found, recommending kindnet
	I0604 15:57:26.711248    3120 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0604 15:57:26.711276    3120 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0604 15:57:26.711276    3120 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0604 15:57:26.711276    3120 start_flags.go:306] config:
	{Name:multinode-20220604155719-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220604155719-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:57:26.714333    3120 out.go:177] * Starting control plane node multinode-20220604155719-5712 in cluster multinode-20220604155719-5712
	I0604 15:57:26.717970    3120 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 15:57:26.720997    3120 out.go:177] * Pulling base image ...
	I0604 15:57:26.722580    3120 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 15:57:26.722580    3120 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 15:57:26.723348    3120 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 15:57:26.723348    3120 cache.go:57] Caching tarball of preloaded images
	I0604 15:57:26.723348    3120 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 15:57:26.723966    3120 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 15:57:26.724199    3120 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220604155719-5712\config.json ...
	I0604 15:57:26.724199    3120 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220604155719-5712\config.json: {Name:mk316dc6ebafb18c8b70744c2dea68afaebe6ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 15:57:27.750355    3120 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 15:57:27.750355    3120 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:57:27.750355    3120 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:57:27.750355    3120 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 15:57:27.750355    3120 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 15:57:27.750355    3120 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 15:57:27.750355    3120 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 15:57:27.751231    3120 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 15:57:27.751231    3120 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:57:29.994460    3120 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-3617466666: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-3617466666: read-only file system"}
	I0604 15:57:29.994460    3120 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 15:57:29.994460    3120 cache.go:206] Successfully downloaded all kic artifacts
	I0604 15:57:29.994460    3120 start.go:352] acquiring machines lock for multinode-20220604155719-5712: {Name:mk7df06d9ba91b0f06c5e69474f69126e3a597c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:57:29.995178    3120 start.go:356] acquired machines lock for "multinode-20220604155719-5712" in 682.2µs
	I0604 15:57:29.995252    3120 start.go:91] Provisioning new machine with config: &{Name:multinode-20220604155719-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220604155719-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 15:57:29.995252    3120 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:57:29.999204    3120 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 15:57:29.999701    3120 start.go:165] libmachine.API.Create for "multinode-20220604155719-5712" (driver="docker")
	I0604 15:57:29.999794    3120 client.go:168] LocalClient.Create starting
	I0604 15:57:30.000150    3120 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:57:30.000150    3120 main.go:134] libmachine: Decoding PEM data...
	I0604 15:57:30.000150    3120 main.go:134] libmachine: Parsing certificate...
	I0604 15:57:30.000677    3120 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:57:30.000955    3120 main.go:134] libmachine: Decoding PEM data...
	I0604 15:57:30.000955    3120 main.go:134] libmachine: Parsing certificate...
	I0604 15:57:30.010419    3120 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:57:31.035231    3120 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:57:31.035231    3120 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0245877s)
	I0604 15:57:31.043252    3120 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 15:57:31.043252    3120 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 15:57:32.062608    3120 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 15:57:32.062608    3120 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0193445s)
	I0604 15:57:32.062608    3120 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 15:57:32.062608    3120 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	I0604 15:57:32.072440    3120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:57:33.115160    3120 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0424676s)
	I0604 15:57:33.134808    3120 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006328] misses:0}
	I0604 15:57:33.135446    3120 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:57:33.135527    3120 network_create.go:115] attempt to create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 15:57:33.142765    3120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712
	W0604 15:57:34.224066    3120 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712 returned with exit code 1
	I0604 15:57:34.224066    3120 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: (1.0812887s)
	E0604 15:57:34.224066    3120 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712 192.168.49.0/24: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfe61f75b6c6496b199f44e1c93798ed5bf5e5ba50ad0f31962e924f0e2a0c6b (br-bfe61f75b6c6): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 15:57:34.224066    3120 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfe61f75b6c6496b199f44e1c93798ed5bf5e5ba50ad0f31962e924f0e2a0c6b (br-bfe61f75b6c6): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfe61f75b6c6496b199f44e1c93798ed5bf5e5ba50ad0f31962e924f0e2a0c6b (br-bfe61f75b6c6): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 15:57:34.239558    3120 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:57:35.285025    3120 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0453138s)
	I0604 15:57:35.295023    3120 cli_runner.go:164] Run: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:57:36.303401    3120 cli_runner.go:211] docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:57:36.303401    3120 cli_runner.go:217] Completed: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0083677s)
	I0604 15:57:36.303401    3120 client.go:171] LocalClient.Create took 6.3035393s
	I0604 15:57:38.322257    3120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:57:38.329763    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:57:39.346199    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:57:39.346199    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.016425s)
	I0604 15:57:39.346199    3120 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:39.636498    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:57:40.660978    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:57:40.660978    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0242164s)
	W0604 15:57:40.660978    3120 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 15:57:40.660978    3120 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:40.671955    3120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:57:40.679096    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:57:41.691794    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:57:41.691794    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0125313s)
	I0604 15:57:41.692050    3120 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:41.990688    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:57:43.010758    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:57:43.010758    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0200589s)
	W0604 15:57:43.010758    3120 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 15:57:43.010758    3120 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:43.010758    3120 start.go:134] duration metric: createHost completed in 13.0153659s
	I0604 15:57:43.010758    3120 start.go:81] releasing machines lock for "multinode-20220604155719-5712", held for 13.0154016s
	W0604 15:57:43.011411    3120 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	I0604 15:57:43.026119    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:57:44.062531    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:57:44.062673    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0362722s)
	I0604 15:57:44.062673    3120 delete.go:82] Unable to get host status for multinode-20220604155719-5712, assuming it has already been deleted: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	W0604 15:57:44.062673    3120 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	I0604 15:57:44.062673    3120 start.go:614] Will try again in 5 seconds ...
	I0604 15:57:49.075797    3120 start.go:352] acquiring machines lock for multinode-20220604155719-5712: {Name:mk7df06d9ba91b0f06c5e69474f69126e3a597c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 15:57:49.076127    3120 start.go:356] acquired machines lock for "multinode-20220604155719-5712" in 153.2µs
	I0604 15:57:49.076127    3120 start.go:94] Skipping create...Using existing machine configuration
	I0604 15:57:49.076127    3120 fix.go:55] fixHost starting: 
	I0604 15:57:49.091538    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:57:50.122907    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:57:50.123096    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.031299s)
	I0604 15:57:50.123125    3120 fix.go:103] recreateIfNeeded on multinode-20220604155719-5712: state= err=unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:50.123125    3120 fix.go:108] machineExists: false. err=machine does not exist
	I0604 15:57:50.129915    3120 out.go:177] * docker "multinode-20220604155719-5712" container is missing, will recreate.
	I0604 15:57:50.133087    3120 delete.go:124] DEMOLISHING multinode-20220604155719-5712 ...
	I0604 15:57:50.145123    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:57:51.151925    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:57:51.151925    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0066652s)
	W0604 15:57:51.152025    3120 stop.go:75] unable to get state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:51.152196    3120 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:51.166112    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:57:52.181207    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:57:52.181207    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0150837s)
	I0604 15:57:52.181207    3120 delete.go:82] Unable to get host status for multinode-20220604155719-5712, assuming it has already been deleted: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:52.189723    3120 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 15:57:53.203836    3120 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 15:57:53.203836    3120 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0137773s)
	I0604 15:57:53.203836    3120 kic.go:356] could not find the container multinode-20220604155719-5712 to remove it. will try anyways
	I0604 15:57:53.212089    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:57:54.230961    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:57:54.230961    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0186848s)
	W0604 15:57:54.231032    3120 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:54.238747    3120 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0"
	W0604 15:57:55.269115    3120 cli_runner.go:211] docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 15:57:55.269149    3120 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": (1.0302344s)
	I0604 15:57:55.269209    3120 oci.go:625] error shutdown multinode-20220604155719-5712: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:56.289492    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:57:57.330685    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:57:57.330685    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0411815s)
	I0604 15:57:57.330685    3120 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:57.330685    3120 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 15:57:57.330685    3120 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:57.811574    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:57:58.851594    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:57:58.851594    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0398103s)
	I0604 15:57:58.851594    3120 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:58.851594    3120 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 15:57:58.851594    3120 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:57:59.764633    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:58:00.763447    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:58:00.763502    3120 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:00.763502    3120 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 15:58:00.763502    3120 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:01.416297    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:58:02.425365    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:58:02.425365    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0090569s)
	I0604 15:58:02.425365    3120 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:02.425365    3120 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 15:58:02.425365    3120 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:03.542014    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:58:04.578487    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:58:04.578487    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0364624s)
	I0604 15:58:04.578487    3120 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:04.578487    3120 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 15:58:04.578487    3120 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:06.113620    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:58:07.151603    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:58:07.151603    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0377272s)
	I0604 15:58:07.151603    3120 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:07.151603    3120 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 15:58:07.151603    3120 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:10.205619    3120 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:58:11.249054    3120 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:58:11.249054    3120 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0430976s)
	I0604 15:58:11.249129    3120 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:11.249196    3120 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 15:58:11.249264    3120 oci.go:88] couldn't shut down multinode-20220604155719-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	 
	I0604 15:58:11.256228    3120 cli_runner.go:164] Run: docker rm -f -v multinode-20220604155719-5712
	I0604 15:58:12.280833    3120 cli_runner.go:217] Completed: docker rm -f -v multinode-20220604155719-5712: (1.0244679s)
	I0604 15:58:12.292086    3120 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 15:58:13.345287    3120 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:13.345409    3120 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.053145s)
	I0604 15:58:13.353094    3120 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:58:14.376777    3120 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:58:14.376777    3120 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.023442s)
	I0604 15:58:14.386397    3120 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 15:58:14.386397    3120 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 15:58:15.407127    3120 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:15.407156    3120 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0206223s)
	I0604 15:58:15.407215    3120 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 15:58:15.407495    3120 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	W0604 15:58:15.408614    3120 delete.go:139] delete failed (probably ok) <nil>
	I0604 15:58:15.408656    3120 fix.go:115] Sleeping 1 second for extra luck!
	I0604 15:58:16.423272    3120 start.go:131] createHost starting for "" (driver="docker")
	I0604 15:58:16.427569    3120 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 15:58:16.427717    3120 start.go:165] libmachine.API.Create for "multinode-20220604155719-5712" (driver="docker")
	I0604 15:58:16.427717    3120 client.go:168] LocalClient.Create starting
	I0604 15:58:16.428306    3120 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 15:58:16.428588    3120 main.go:134] libmachine: Decoding PEM data...
	I0604 15:58:16.428588    3120 main.go:134] libmachine: Parsing certificate...
	I0604 15:58:16.428844    3120 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 15:58:16.428985    3120 main.go:134] libmachine: Decoding PEM data...
	I0604 15:58:16.429079    3120 main.go:134] libmachine: Parsing certificate...
	I0604 15:58:16.437643    3120 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 15:58:17.452791    3120 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 15:58:17.452791    3120 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0151375s)
	I0604 15:58:17.460907    3120 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 15:58:17.460907    3120 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 15:58:18.464790    3120 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:18.464790    3120 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0038729s)
	I0604 15:58:18.464790    3120 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 15:58:18.464790    3120 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	I0604 15:58:18.472186    3120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 15:58:19.498460    3120 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.025546s)
	I0604 15:58:19.515897    3120 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006328] amended:false}} dirty:map[] misses:0}
	I0604 15:58:19.516442    3120 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:58:19.537159    3120 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006328] amended:true}} dirty:map[192.168.49.0:0xc000006328 192.168.58.0:0xc0007f6370] misses:0}
	I0604 15:58:19.537159    3120 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 15:58:19.537159    3120 network_create.go:115] attempt to create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 15:58:19.545286    3120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712
	W0604 15:58:20.579450    3120 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:20.579497    3120 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: (1.0339695s)
	E0604 15:58:20.579525    3120 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712 192.168.58.0/24: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5820e98affdf910abf497cfe4c90fad756bc1ef6a50eba918858a73efd35a738 (br-5820e98affdf): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 15:58:20.579635    3120 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5820e98affdf910abf497cfe4c90fad756bc1ef6a50eba918858a73efd35a738 (br-5820e98affdf): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5820e98affdf910abf497cfe4c90fad756bc1ef6a50eba918858a73efd35a738 (br-5820e98affdf): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 15:58:20.592642    3120 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 15:58:21.612949    3120 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0202332s)
	I0604 15:58:21.620429    3120 cli_runner.go:164] Run: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 15:58:22.641144    3120 cli_runner.go:211] docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 15:58:22.641210    3120 cli_runner.go:217] Completed: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0206565s)
	I0604 15:58:22.641237    3120 client.go:171] LocalClient.Create took 6.2134537s
	I0604 15:58:24.652900    3120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:58:24.658917    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:25.675105    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:25.675105    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0159357s)
	I0604 15:58:25.675105    3120 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:26.018826    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:27.021120    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:27.021192    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0021159s)
	W0604 15:58:27.021192    3120 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 15:58:27.021192    3120 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:27.031559    3120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:58:27.037945    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:28.078376    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:28.078376    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0402075s)
	I0604 15:58:28.078376    3120 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:28.321389    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:29.327004    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:29.327004    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0056042s)
	W0604 15:58:29.327004    3120 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 15:58:29.327004    3120 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:29.327004    3120 start.go:134] duration metric: createHost completed in 12.9034428s
	I0604 15:58:29.337009    3120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 15:58:29.343014    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:30.374827    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:30.374827    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0315082s)
	I0604 15:58:30.374827    3120 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:30.640683    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:31.658675    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:31.658675    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0178141s)
	W0604 15:58:31.658805    3120 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 15:58:31.658805    3120 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:31.669645    3120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 15:58:31.675037    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:32.677330    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:32.677330    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0022822s)
	I0604 15:58:32.677330    3120 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:32.898994    3120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 15:58:33.967201    3120 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 15:58:33.967281    3120 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0680854s)
	W0604 15:58:33.967467    3120 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 15:58:33.967528    3120 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 15:58:33.967528    3120 fix.go:57] fixHost completed within 44.8909207s
	I0604 15:58:33.967591    3120 start.go:81] releasing machines lock for "multinode-20220604155719-5712", held for 44.8909831s
	W0604 15:58:33.968060    3120 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	I0604 15:58:33.973435    3120 out.go:177] 
	W0604 15:58:33.976105    3120 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	W0604 15:58:33.976105    3120 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 15:58:33.976696    3120 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 15:58:33.980057    3120 out.go:177] 

** /stderr **
multinode_test.go:85: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0868135s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7954258s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:58:37.973008    1324 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (78.14s)

TestMultiNode/serial/DeployApp2Nodes (16.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (1.854057s)

** stderr ** 
	error: cluster "multinode-20220604155719-5712" does not exist

** /stderr **
multinode_test.go:481: failed to create busybox deployment to multinode cluster
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- rollout status deployment/busybox: exit status 1 (1.8595803s)

** stderr ** 
	error: no server found for cluster "multinode-20220604155719-5712"

** /stderr **
multinode_test.go:486: failed to deploy busybox to multinode cluster
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (1.8319909s)

** stderr ** 
	error: no server found for cluster "multinode-20220604155719-5712"

** /stderr **
multinode_test.go:492: failed to retrieve Pod IPs
multinode_test.go:496: expected 2 Pod IPs but got 1
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.8155907s)

** stderr ** 
	error: no server found for cluster "multinode-20220604155719-5712"

** /stderr **
multinode_test.go:504: failed get Pod names
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- exec  -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- exec  -- nslookup kubernetes.io: exit status 1 (1.8281137s)

** stderr ** 
	error: no server found for cluster "multinode-20220604155719-5712"

** /stderr **
multinode_test.go:512: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- exec  -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- exec  -- nslookup kubernetes.default: exit status 1 (1.8608112s)

** stderr ** 
	error: no server found for cluster "multinode-20220604155719-5712"

** /stderr **
multinode_test.go:522: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1.8364359s)

** stderr ** 
	error: no server found for cluster "multinode-20220604155719-5712"

** /stderr **
multinode_test.go:530: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.115149s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7496466s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:58:54.738240    4476 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (16.77s)

TestMultiNode/serial/PingHostFrom2Pods (5.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220604155719-5712 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.8437353s)

** stderr ** 
	error: no server found for cluster "multinode-20220604155719-5712"

** /stderr **
multinode_test.go:540: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0556498s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7518789s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:59:00.400029    1640 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (5.66s)

TestMultiNode/serial/AddNode (6.99s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220604155719-5712 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220604155719-5712 -v 3 --alsologtostderr: exit status 80 (3.0671769s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 15:59:00.656660    7196 out.go:296] Setting OutFile to fd 972 ...
	I0604 15:59:00.720650    7196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:00.720650    7196 out.go:309] Setting ErrFile to fd 656...
	I0604 15:59:00.720650    7196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:00.731662    7196 mustload.go:65] Loading cluster: multinode-20220604155719-5712
	I0604 15:59:00.731662    7196 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:59:00.745657    7196 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:59:03.183656    7196 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:59:03.183656    7196 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (2.4379731s)
	I0604 15:59:03.186664    7196 out.go:177] 
	W0604 15:59:03.188657    7196 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 15:59:03.189649    7196 out.go:239] * 
	* 
	W0604 15:59:03.458155    7196 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_24.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_24.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 15:59:03.461355    7196 out.go:177] 

** /stderr **
multinode_test.go:110: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-20220604155719-5712 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.096046s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.8133984s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:59:07.386150    8428 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (6.99s)

TestMultiNode/serial/ProfileList (7.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.7992559s)
multinode_test.go:153: expected profile "multinode-20220604155719-5712" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-20220604155719-5712\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-20220604155719-5712\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.23.6\",\"ClusterName\":\"multinode-20220604155719-5712\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":[{\"Component\":\"kubelet\",\"Key\":\"cni-conf-dir\",\"Value\":\"/etc/cni/net.mk\"}],\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.23.6\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube2:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false},\"Active\":false}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0379802s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7941796s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:59:15.026797    8448 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (7.64s)

TestMultiNode/serial/CopyFile (6.61s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --output json --alsologtostderr: exit status 7 (2.7364537s)

-- stdout --
	{"Name":"multinode-20220604155719-5712","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I0604 15:59:15.290272    1396 out.go:296] Setting OutFile to fd 628 ...
	I0604 15:59:15.345916    1396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:15.345916    1396 out.go:309] Setting ErrFile to fd 880...
	I0604 15:59:15.345916    1396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:15.356075    1396 out.go:303] Setting JSON to true
	I0604 15:59:15.356075    1396 mustload.go:65] Loading cluster: multinode-20220604155719-5712
	I0604 15:59:15.356221    1396 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:59:15.356853    1396 status.go:253] checking status of multinode-20220604155719-5712 ...
	I0604 15:59:15.372130    1396 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:59:17.762796    1396 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:59:17.762960    1396 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (2.3906401s)
	I0604 15:59:17.763244    1396 status.go:328] multinode-20220604155719-5712 host status = "" (err=state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	)
	I0604 15:59:17.763244    1396 status.go:255] multinode-20220604155719-5712 status: &{Name:multinode-20220604155719-5712 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0604 15:59:17.763244    1396 status.go:258] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	E0604 15:59:17.763418    1396 status.go:261] The "multinode-20220604155719-5712" host does not exist!

** /stderr **
multinode_test.go:178: failed to decode json from status: args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0815714s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7810123s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:59:21.635675    8348 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (6.61s)

TestMultiNode/serial/StopNode (9.96s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node stop m03
multinode_test.go:208: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node stop m03: exit status 85 (628.7413ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_a721422985a44b3996d93fcfe1a29c6759a29372_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:210: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node stop m03": exit status 85
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status: exit status 7 (2.743854s)

-- stdout --
	multinode-20220604155719-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0604 15:59:25.009171    7116 status.go:258] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	E0604 15:59:25.009236    7116 status.go:261] The "multinode-20220604155719-5712" host does not exist!

** /stderr **
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr: exit status 7 (2.7524594s)

-- stdout --
	multinode-20220604155719-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0604 15:59:25.273858    4540 out.go:296] Setting OutFile to fd 960 ...
	I0604 15:59:25.333829    4540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:25.333829    4540 out.go:309] Setting ErrFile to fd 872...
	I0604 15:59:25.333829    4540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:25.347831    4540 out.go:303] Setting JSON to false
	I0604 15:59:25.347831    4540 mustload.go:65] Loading cluster: multinode-20220604155719-5712
	I0604 15:59:25.349837    4540 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:59:25.349837    4540 status.go:253] checking status of multinode-20220604155719-5712 ...
	I0604 15:59:25.362819    4540 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 15:59:27.761770    4540 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 15:59:27.761770    4540 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (2.3986292s)
	I0604 15:59:27.761770    4540 status.go:328] multinode-20220604155719-5712 host status = "" (err=state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	)
	I0604 15:59:27.761770    4540 status.go:255] multinode-20220604155719-5712 status: &{Name:multinode-20220604155719-5712 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0604 15:59:27.761770    4540 status.go:258] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	E0604 15:59:27.761770    4540 status.go:261] The "multinode-20220604155719-5712" host does not exist!

** /stderr **
multinode_test.go:227: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr": multinode-20220604155719-5712
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:231: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr": multinode-20220604155719-5712
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:235: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr": multinode-20220604155719-5712
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0690339s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7534956s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:59:31.592951    6364 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (9.96s)

TestMultiNode/serial/StartAfterStop (8.35s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.0845506s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node start m03 --alsologtostderr: exit status 85 (646.2066ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 15:59:32.957879    6388 out.go:296] Setting OutFile to fd 816 ...
	I0604 15:59:33.031185    6388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:33.031269    6388 out.go:309] Setting ErrFile to fd 616...
	I0604 15:59:33.031269    6388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:59:33.047158    6388 mustload.go:65] Loading cluster: multinode-20220604155719-5712
	I0604 15:59:33.048256    6388 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:59:33.062185    6388 out.go:177] 
	W0604 15:59:33.065649    6388 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	W0604 15:59:33.065649    6388 out.go:239] * 
	* 
	W0604 15:59:33.321415    6388 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 15:59:33.325576    6388 out.go:177] 

** /stderr **
multinode_test.go:254: I0604 15:59:32.957879    6388 out.go:296] Setting OutFile to fd 816 ...
I0604 15:59:33.031185    6388 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0604 15:59:33.031269    6388 out.go:309] Setting ErrFile to fd 616...
I0604 15:59:33.031269    6388 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0604 15:59:33.047158    6388 mustload.go:65] Loading cluster: multinode-20220604155719-5712
I0604 15:59:33.048256    6388 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0604 15:59:33.062185    6388 out.go:177] 
W0604 15:59:33.065649    6388 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
W0604 15:59:33.065649    6388 out.go:239] * 
* 
W0604 15:59:33.321415    6388 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0604 15:59:33.325576    6388 out.go:177] 
multinode_test.go:255: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node start m03 --alsologtostderr": exit status 85
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status
multinode_test.go:259: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status: exit status 7 (2.8306281s)

-- stdout --
	multinode-20220604155719-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0604 15:59:36.164311    3280 status.go:258] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	E0604 15:59:36.164311    3280 status.go:261] The "multinode-20220604155719-5712" host does not exist!

** /stderr **
multinode_test.go:261: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0701968s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.6957976s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 15:59:39.939367    5632 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (8.35s)

TestMultiNode/serial/RestartKeepsNodes (136.89s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220604155719-5712
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220604155719-5712
multinode_test.go:288: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p multinode-20220604155719-5712: exit status 82 (22.0706431s)

-- stdout --
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:59:45.524437    7968 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220604155719-5712: stopping schedule-stop service for profile multinode-20220604155719-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220604155719-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:290: failed to run minikube stop. args "out/minikube-windows-amd64.exe node list -p multinode-20220604155719-5712" : exit status 82
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true -v=8 --alsologtostderr: exit status 60 (1m50.0295784s)

-- stdout --
	* [multinode-20220604155719-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220604155719-5712 in cluster multinode-20220604155719-5712
	* Pulling base image ...
	* docker "multinode-20220604155719-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220604155719-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:00:02.623204    8680 out.go:296] Setting OutFile to fd 896 ...
	I0604 16:00:02.704512    8680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:00:02.704512    8680 out.go:309] Setting ErrFile to fd 808...
	I0604 16:00:02.704512    8680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:00:02.715937    8680 out.go:303] Setting JSON to false
	I0604 16:00:02.717859    8680 start.go:115] hostinfo: {"hostname":"minikube2","uptime":9474,"bootTime":1654348928,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:00:02.718714    8680 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:00:02.726148    8680 out.go:177] * [multinode-20220604155719-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:00:02.729396    8680 notify.go:193] Checking for updates...
	I0604 16:00:02.731816    8680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:00:02.734622    8680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:00:02.736919    8680 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:00:02.739424    8680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:00:02.744462    8680 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:00:02.744462    8680 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:00:05.404294    8680 docker.go:137] docker version: linux-20.10.16
	I0604 16:00:05.412798    8680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:00:07.391658    8680 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9786424s)
	I0604 16:00:07.392376    8680 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:00:06.4154848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:00:07.408783    8680 out.go:177] * Using the docker driver based on existing profile
	I0604 16:00:07.411467    8680 start.go:284] selected driver: docker
	I0604 16:00:07.411595    8680 start.go:806] validating driver "docker" against &{Name:multinode-20220604155719-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220604155719-5712 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:00:07.411595    8680 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:00:07.431586    8680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:00:09.413378    8680 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9817353s)
	I0604 16:00:09.413417    8680 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:00:08.433117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:00:09.522987    8680 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:00:09.523184    8680 cni.go:95] Creating CNI manager for ""
	I0604 16:00:09.523184    8680 cni.go:156] 1 nodes found, recommending kindnet
	I0604 16:00:09.523184    8680 start_flags.go:306] config:
	{Name:multinode-20220604155719-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220604155719-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false}
	I0604 16:00:09.527756    8680 out.go:177] * Starting control plane node multinode-20220604155719-5712 in cluster multinode-20220604155719-5712
	I0604 16:00:09.530215    8680 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:00:09.532384    8680 out.go:177] * Pulling base image ...
	I0604 16:00:09.536546    8680 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:00:09.536546    8680 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:00:09.536812    8680 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:00:09.536812    8680 cache.go:57] Caching tarball of preloaded images
	I0604 16:00:09.537271    8680 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:00:09.537555    8680 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:00:09.537972    8680 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220604155719-5712\config.json ...
	I0604 16:00:10.605149    8680 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:00:10.605149    8680 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:00:10.605149    8680 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:00:10.605149    8680 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:00:10.605149    8680 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:00:10.605673    8680 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:00:10.605893    8680 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:00:10.605989    8680 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:00:10.605989    8680 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:00:12.826179    8680 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-1644430292: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-1644430292: read-only file system"}
	I0604 16:00:12.826259    8680 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:00:12.826334    8680 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:00:12.826473    8680 start.go:352] acquiring machines lock for multinode-20220604155719-5712: {Name:mk7df06d9ba91b0f06c5e69474f69126e3a597c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:00:12.826708    8680 start.go:356] acquired machines lock for "multinode-20220604155719-5712" in 160µs
	I0604 16:00:12.826925    8680 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:00:12.826999    8680 fix.go:55] fixHost starting: 
	I0604 16:00:12.845166    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:13.879900    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:13.879900    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0346125s)
	I0604 16:00:13.879900    8680 fix.go:103] recreateIfNeeded on multinode-20220604155719-5712: state= err=unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:13.879900    8680 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:00:13.886684    8680 out.go:177] * docker "multinode-20220604155719-5712" container is missing, will recreate.
	I0604 16:00:13.890209    8680 delete.go:124] DEMOLISHING multinode-20220604155719-5712 ...
	I0604 16:00:13.904272    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:14.950370    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:14.950404    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0459386s)
	W0604 16:00:14.950498    8680 stop.go:75] unable to get state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:14.950523    8680 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:14.967955    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:16.046377    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:16.046377    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0784103s)
	I0604 16:00:16.046377    8680 delete.go:82] Unable to get host status for multinode-20220604155719-5712, assuming it has already been deleted: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:16.055450    8680 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:00:17.078250    8680 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:17.078324    8680 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0225267s)
	I0604 16:00:17.078324    8680 kic.go:356] could not find the container multinode-20220604155719-5712 to remove it. will try anyways
	I0604 16:00:17.087252    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:18.152798    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:18.153093    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0655351s)
	W0604 16:00:18.153262    8680 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:18.164790    8680 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0"
	W0604 16:00:19.199039    8680 cli_runner.go:211] docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:00:19.199092    8680 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": (1.0340718s)
	I0604 16:00:19.199092    8680 oci.go:625] error shutdown multinode-20220604155719-5712: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:20.209843    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:21.221422    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:21.221422    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0115053s)
	I0604 16:00:21.221422    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:21.221422    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:00:21.221422    8680 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:21.793620    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:22.826203    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:22.826203    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0325721s)
	I0604 16:00:22.826203    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:22.826203    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:00:22.826203    8680 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:23.921931    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:24.927066    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:24.927066    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0047074s)
	I0604 16:00:24.927066    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:24.927066    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:00:24.927066    8680 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:26.257576    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:27.266500    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:27.266500    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0086178s)
	I0604 16:00:27.266671    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:27.266671    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:00:27.266671    8680 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:28.867579    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:29.923579    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:29.923579    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0557315s)
	I0604 16:00:29.923579    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:29.923579    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:00:29.923579    8680 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:32.286252    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:33.309526    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:33.309526    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0229346s)
	I0604 16:00:33.309526    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:33.309526    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:00:33.309526    8680 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:37.839854    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:00:38.849793    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:00:38.849793    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0097439s)
	I0604 16:00:38.850038    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:38.850094    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:00:38.850094    8680 oci.go:88] couldn't shut down multinode-20220604155719-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	 
	I0604 16:00:38.858308    8680 cli_runner.go:164] Run: docker rm -f -v multinode-20220604155719-5712
	I0604 16:00:39.893012    8680 cli_runner.go:217] Completed: docker rm -f -v multinode-20220604155719-5712: (1.0346934s)
	I0604 16:00:39.902458    8680 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:00:40.910627    8680 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:40.910627    8680 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0081585s)
	I0604 16:00:40.918314    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:00:41.927923    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:00:41.928251    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0095981s)
	I0604 16:00:41.939770    8680 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:00:41.940786    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:00:42.959367    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:42.959367    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0185697s)
	I0604 16:00:42.959367    8680 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:00:42.959367    8680 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	W0604 16:00:42.960602    8680 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:00:42.960650    8680 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:00:43.968644    8680 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:00:43.975087    8680 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:00:43.975597    8680 start.go:165] libmachine.API.Create for "multinode-20220604155719-5712" (driver="docker")
	I0604 16:00:43.975597    8680 client.go:168] LocalClient.Create starting
	I0604 16:00:43.975789    8680 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:00:43.976438    8680 main.go:134] libmachine: Decoding PEM data...
	I0604 16:00:43.976509    8680 main.go:134] libmachine: Parsing certificate...
	I0604 16:00:43.976782    8680 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:00:43.977014    8680 main.go:134] libmachine: Decoding PEM data...
	I0604 16:00:43.977075    8680 main.go:134] libmachine: Parsing certificate...
	I0604 16:00:43.987246    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:00:45.012254    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:00:45.012342    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.02483s)
	I0604 16:00:45.019459    8680 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:00:45.020406    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:00:46.036239    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:46.036239    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0158218s)
	I0604 16:00:46.036239    8680 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:00:46.036239    8680 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	I0604 16:00:46.044267    8680 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:00:47.089402    8680 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0450938s)
	I0604 16:00:47.108135    8680 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005c4680] misses:0}
	I0604 16:00:47.108135    8680 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:00:47.108135    8680 network_create.go:115] attempt to create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:00:47.116558    8680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712
	W0604 16:00:48.121738    8680 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:48.121738    8680 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: (1.0051699s)
	E0604 16:00:48.121738    8680 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712 192.168.49.0/24: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e1e3422b4cfaecdf20286326678aa1dcba88a1ccfa6597e575c1cbba3222584 (br-5e1e3422b4cf): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:00:48.121738    8680 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e1e3422b4cfaecdf20286326678aa1dcba88a1ccfa6597e575c1cbba3222584 (br-5e1e3422b4cf): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e1e3422b4cfaecdf20286326678aa1dcba88a1ccfa6597e575c1cbba3222584 (br-5e1e3422b4cf): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:00:48.138151    8680 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:00:49.170791    8680 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0326293s)
	I0604 16:00:49.177194    8680 cli_runner.go:164] Run: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:00:50.205983    8680 cli_runner.go:211] docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:00:50.205983    8680 cli_runner.go:217] Completed: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0287775s)
	I0604 16:00:50.205983    8680 client.go:171] LocalClient.Create took 6.2302522s
	I0604 16:00:52.218311    8680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:00:52.224781    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:00:53.275352    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:53.275396    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0504517s)
	I0604 16:00:53.275396    8680 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:53.456275    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:00:54.485673    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:54.485923    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0293869s)
	W0604 16:00:54.486104    8680 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:00:54.486104    8680 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:54.496251    8680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:00:54.502886    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:00:55.502154    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:55.502154    8680 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:55.711296    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:00:56.724606    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:56.724671    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0132416s)
	W0604 16:00:56.724757    8680 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:00:56.724757    8680 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:56.724757    8680 start.go:134] duration metric: createHost completed in 12.7559775s
	I0604 16:00:56.735002    8680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:00:56.740928    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:00:57.791458    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:57.791458    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0505196s)
	I0604 16:00:57.791458    8680 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:58.128211    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:00:59.149949    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:00:59.149949    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0217269s)
	W0604 16:00:59.149949    8680 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:00:59.149949    8680 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:00:59.160984    8680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:00:59.168338    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:00.213988    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:00.214019    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0454838s)
	I0604 16:01:00.214132    8680 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:00.450268    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:01.470712    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:01.470712    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0204329s)
	W0604 16:01:01.470712    8680 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:01:01.470712    8680 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:01.470712    8680 fix.go:57] fixHost completed within 48.6431955s
	I0604 16:01:01.470712    8680 start.go:81] releasing machines lock for "multinode-20220604155719-5712", held for 48.6434529s
	W0604 16:01:01.470712    8680 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	W0604 16:01:01.471652    8680 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	I0604 16:01:01.471688    8680 start.go:614] Will try again in 5 seconds ...
	I0604 16:01:06.486646    8680 start.go:352] acquiring machines lock for multinode-20220604155719-5712: {Name:mk7df06d9ba91b0f06c5e69474f69126e3a597c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:01:06.486646    8680 start.go:356] acquired machines lock for "multinode-20220604155719-5712" in 0s
	I0604 16:01:06.486646    8680 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:01:06.487196    8680 fix.go:55] fixHost starting: 
	I0604 16:01:06.508588    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:07.560639    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:07.560639    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0520406s)
	I0604 16:01:07.560639    8680 fix.go:103] recreateIfNeeded on multinode-20220604155719-5712: state= err=unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:07.560639    8680 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:01:07.564812    8680 out.go:177] * docker "multinode-20220604155719-5712" container is missing, will recreate.
	I0604 16:01:07.567094    8680 delete.go:124] DEMOLISHING multinode-20220604155719-5712 ...
	I0604 16:01:07.581951    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:08.623550    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:08.623659    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0415509s)
	W0604 16:01:08.623659    8680 stop.go:75] unable to get state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:08.623659    8680 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:08.638576    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:09.672814    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:09.672814    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0341653s)
	I0604 16:01:09.672814    8680 delete.go:82] Unable to get host status for multinode-20220604155719-5712, assuming it has already been deleted: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:09.681084    8680 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:01:10.708782    8680 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:10.708782    8680 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0276871s)
	I0604 16:01:10.708782    8680 kic.go:356] could not find the container multinode-20220604155719-5712 to remove it. will try anyways
	I0604 16:01:10.717112    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:11.728172    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:11.728172    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0107806s)
	W0604 16:01:11.728172    8680 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:11.735359    8680 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0"
	W0604 16:01:12.730413    8680 cli_runner.go:211] docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:01:12.730442    8680 oci.go:625] error shutdown multinode-20220604155719-5712: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:13.739880    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:14.793475    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:14.793475    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0533702s)
	I0604 16:01:14.793475    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:14.793475    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:01:14.793475    8680 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:15.291887    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:16.332443    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:16.332443    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0404422s)
	I0604 16:01:16.332659    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:16.332691    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:01:16.332735    8680 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:16.942019    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:17.993264    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:17.993264    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0511791s)
	I0604 16:01:17.993343    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:17.993343    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:01:17.993420    8680 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:18.903314    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:19.944066    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:19.944066    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0405656s)
	I0604 16:01:19.944286    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:19.944286    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:01:19.944286    8680 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:21.949088    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:22.940931    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:22.940931    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:22.940931    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:01:22.940931    8680 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:24.782883    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:25.796783    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:25.796783    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.013889s)
	I0604 16:01:25.796783    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:25.796783    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:01:25.796783    8680 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:28.483557    8680 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:01:29.498962    8680 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:01:29.499011    8680 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0152708s)
	I0604 16:01:29.499205    8680 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:29.499205    8680 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:01:29.499256    8680 oci.go:88] couldn't shut down multinode-20220604155719-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	 
	I0604 16:01:29.507676    8680 cli_runner.go:164] Run: docker rm -f -v multinode-20220604155719-5712
	I0604 16:01:30.535037    8680 cli_runner.go:217] Completed: docker rm -f -v multinode-20220604155719-5712: (1.0272843s)
	I0604 16:01:30.542364    8680 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:01:31.538883    8680 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:31.547125    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:01:32.568435    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:01:32.568592    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0211132s)
	I0604 16:01:32.576199    8680 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:01:32.576199    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:01:33.612234    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:33.612234    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0360238s)
	I0604 16:01:33.612234    8680 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:01:33.612234    8680 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	W0604 16:01:33.613549    8680 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:01:33.613747    8680 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:01:34.614808    8680 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:01:34.621103    8680 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:01:34.621474    8680 start.go:165] libmachine.API.Create for "multinode-20220604155719-5712" (driver="docker")
	I0604 16:01:34.621474    8680 client.go:168] LocalClient.Create starting
	I0604 16:01:34.622089    8680 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:01:34.622089    8680 main.go:134] libmachine: Decoding PEM data...
	I0604 16:01:34.622089    8680 main.go:134] libmachine: Parsing certificate...
	I0604 16:01:34.622089    8680 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:01:34.622659    8680 main.go:134] libmachine: Decoding PEM data...
	I0604 16:01:34.622659    8680 main.go:134] libmachine: Parsing certificate...
	I0604 16:01:34.630733    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:01:35.675379    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:01:35.675554    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.044495s)
	I0604 16:01:35.683278    8680 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:01:35.683278    8680 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:01:36.738372    8680 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:36.738372    8680 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0550822s)
	I0604 16:01:36.738372    8680 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:01:36.738372    8680 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	I0604 16:01:36.745967    8680 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:01:37.753190    8680 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0071004s)
	I0604 16:01:37.769489    8680 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005c4680] amended:false}} dirty:map[] misses:0}
	I0604 16:01:37.769489    8680 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:01:37.784251    8680 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005c4680] amended:true}} dirty:map[192.168.49.0:0xc0005c4680 192.168.58.0:0xc000006770] misses:0}
	I0604 16:01:37.784251    8680 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:01:37.784620    8680 network_create.go:115] attempt to create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:01:37.792775    8680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712
	W0604 16:01:38.816057    8680 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:38.816091    8680 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: (1.0231069s)
	E0604 16:01:38.816290    8680 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712 192.168.58.0/24: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e25d91e96950e8ddf63b7967728b4233ef0b057904c60257555fc6ef76f1aefe (br-e25d91e96950): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:01:38.816603    8680 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e25d91e96950e8ddf63b7967728b4233ef0b057904c60257555fc6ef76f1aefe (br-e25d91e96950): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e25d91e96950e8ddf63b7967728b4233ef0b057904c60257555fc6ef76f1aefe (br-e25d91e96950): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:01:38.830374    8680 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:01:39.861146    8680 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0306872s)
	I0604 16:01:39.868143    8680 cli_runner.go:164] Run: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:01:40.909878    8680 cli_runner.go:211] docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:01:40.909878    8680 cli_runner.go:217] Completed: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0417244s)
	I0604 16:01:40.909878    8680 client.go:171] LocalClient.Create took 6.288337s
	I0604 16:01:42.924265    8680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:01:42.930354    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:43.958653    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:43.958786    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0281261s)
	I0604 16:01:43.958786    8680 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:44.243206    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:45.281316    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:45.281316    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0379215s)
	W0604 16:01:45.281671    8680 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:01:45.281763    8680 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:45.292051    8680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:01:45.298459    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:46.300610    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:46.300610    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0021406s)
	I0604 16:01:46.300610    8680 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:46.516857    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:47.565471    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:47.565471    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0486027s)
	W0604 16:01:47.565471    8680 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:01:47.565471    8680 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:47.565471    8680 start.go:134] duration metric: createHost completed in 12.9501924s
	I0604 16:01:47.577329    8680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:01:47.585402    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:48.613493    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:48.613493    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0280797s)
	I0604 16:01:48.613493    8680 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:48.949517    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:49.957171    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:49.957171    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0074181s)
	W0604 16:01:49.957171    8680 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:01:49.957171    8680 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:49.967714    8680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:01:49.973713    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:50.962088    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:50.962216    8680 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:51.323936    8680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:01:52.360477    8680 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:01:52.360477    8680 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0361422s)
	W0604 16:01:52.360477    8680 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:01:52.360477    8680 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:01:52.360477    8680 fix.go:57] fixHost completed within 45.8727954s
	I0604 16:01:52.360477    8680 start.go:81] releasing machines lock for "multinode-20220604155719-5712", held for 45.8733454s
	W0604 16:01:52.361179    8680 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	I0604 16:01:52.366341    8680 out.go:177] 
	W0604 16:01:52.368708    8680 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	W0604 16:01:52.368986    8680 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:01:52.369079    8680 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:01:52.372206    8680 out.go:177] 

** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-20220604155719-5712" : exit status 60
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220604155719-5712
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0929973s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7986259s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:01:56.833879    3304 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (136.89s)

TestMultiNode/serial/DeleteNode (9.9s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node delete m03
multinode_test.go:392: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node delete m03: exit status 80 (3.0841041s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_207105384607abbf0a822abec5db82084f27bc08_4.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:394: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 node delete m03": exit status 80
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr
multinode_test.go:398: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr: exit status 7 (2.8018219s)

-- stdout --
	multinode-20220604155719-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0604 16:02:00.182958    8404 out.go:296] Setting OutFile to fd 964 ...
	I0604 16:02:00.243401    8404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:02:00.243401    8404 out.go:309] Setting ErrFile to fd 772...
	I0604 16:02:00.243401    8404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:02:00.254936    8404 out.go:303] Setting JSON to false
	I0604 16:02:00.255005    8404 mustload.go:65] Loading cluster: multinode-20220604155719-5712
	I0604 16:02:00.255857    8404 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:02:00.255920    8404 status.go:253] checking status of multinode-20220604155719-5712 ...
	I0604 16:02:00.269266    8404 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:02.719186    8404 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:02.719273    8404 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (2.4497113s)
	I0604 16:02:02.719408    8404 status.go:328] multinode-20220604155719-5712 host status = "" (err=state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	)
	I0604 16:02:02.719408    8404 status.go:255] multinode-20220604155719-5712 status: &{Name:multinode-20220604155719-5712 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0604 16:02:02.719475    8404 status.go:258] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	E0604 16:02:02.719475    8404 status.go:261] The "multinode-20220604155719-5712" host does not exist!

** /stderr **
multinode_test.go:400: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0997428s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.9015066s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:02:06.729213    6648 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (9.90s)

TestMultiNode/serial/StopMultiNode (31.48s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 stop
multinode_test.go:312: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 stop: exit status 82 (22.021233s)

-- stdout --
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	* Stopping node "multinode-20220604155719-5712"  ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:02:11.924240    5536 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220604155719-5712: stopping schedule-stop service for profile multinode-20220604155719-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220604155719-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:314: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 stop": exit status 82
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status: exit status 7 (2.8079547s)

-- stdout --
	multinode-20220604155719-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0604 16:02:31.559614    6872 status.go:258] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	E0604 16:02:31.559683    6872 status.go:261] The "multinode-20220604155719-5712" host does not exist!

** /stderr **
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr: exit status 7 (2.7963786s)

-- stdout --
	multinode-20220604155719-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0604 16:02:31.835043    5264 out.go:296] Setting OutFile to fd 824 ...
	I0604 16:02:31.906241    5264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:02:31.906241    5264 out.go:309] Setting ErrFile to fd 644...
	I0604 16:02:31.906241    5264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:02:31.916249    5264 out.go:303] Setting JSON to false
	I0604 16:02:31.916249    5264 mustload.go:65] Loading cluster: multinode-20220604155719-5712
	I0604 16:02:31.917250    5264 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:02:31.917250    5264 status.go:253] checking status of multinode-20220604155719-5712 ...
	I0604 16:02:31.930242    5264 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:34.356632    5264 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:34.356632    5264 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (2.4261991s)
	I0604 16:02:34.356991    5264 status.go:328] multinode-20220604155719-5712 host status = "" (err=state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	)
	I0604 16:02:34.357027    5264 status.go:255] multinode-20220604155719-5712 status: &{Name:multinode-20220604155719-5712 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0604 16:02:34.357091    5264 status.go:258] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	E0604 16:02:34.357091    5264 status.go:261] The "multinode-20220604155719-5712" host does not exist!

** /stderr **
multinode_test.go:331: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr": multinode-20220604155719-5712
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:335: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220604155719-5712 status --alsologtostderr": multinode-20220604155719-5712
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0819547s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7606366s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:02:38.207604    3612 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (31.48s)

TestMultiNode/serial/RestartMultiNode (114.92s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.0694095s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true -v=8 --alsologtostderr --driver=docker: exit status 60 (1m49.7594016s)

-- stdout --
	* [multinode-20220604155719-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220604155719-5712 in cluster multinode-20220604155719-5712
	* Pulling base image ...
	* docker "multinode-20220604155719-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220604155719-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:02:39.551870    8520 out.go:296] Setting OutFile to fd 380 ...
	I0604 16:02:39.615572    8520 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:02:39.615572    8520 out.go:309] Setting ErrFile to fd 916...
	I0604 16:02:39.615572    8520 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:02:39.627225    8520 out.go:303] Setting JSON to false
	I0604 16:02:39.629331    8520 start.go:115] hostinfo: {"hostname":"minikube2","uptime":9631,"bootTime":1654348928,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:02:39.629331    8520 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:02:39.634974    8520 out.go:177] * [multinode-20220604155719-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:02:39.637084    8520 notify.go:193] Checking for updates...
	I0604 16:02:39.639619    8520 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:02:39.641861    8520 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:02:39.644329    8520 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:02:39.646697    8520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:02:39.649055    8520 config.go:178] Loaded profile config "multinode-20220604155719-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:02:39.649823    8520 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:02:42.146726    8520 docker.go:137] docker version: linux-20.10.16
	I0604 16:02:42.156286    8520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:02:44.161738    8520 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0054309s)
	I0604 16:02:44.162751    8520 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:02:43.1457107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:02:44.170410    8520 out.go:177] * Using the docker driver based on existing profile
	I0604 16:02:44.173034    8520 start.go:284] selected driver: docker
	I0604 16:02:44.173034    8520 start.go:806] validating driver "docker" against &{Name:multinode-20220604155719-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220604155719-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:02:44.173034    8520 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:02:44.193554    8520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:02:46.139691    8520 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9461159s)
	I0604 16:02:46.139691    8520 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:02:45.1930241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:02:46.243618    8520 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:02:46.243780    8520 cni.go:95] Creating CNI manager for ""
	I0604 16:02:46.243780    8520 cni.go:156] 1 nodes found, recommending kindnet
	I0604 16:02:46.243780    8520 start_flags.go:306] config:
	{Name:multinode-20220604155719-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220604155719-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false}
	I0604 16:02:46.249151    8520 out.go:177] * Starting control plane node multinode-20220604155719-5712 in cluster multinode-20220604155719-5712
	I0604 16:02:46.251909    8520 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:02:46.255233    8520 out.go:177] * Pulling base image ...
	I0604 16:02:46.257157    8520 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:02:46.257157    8520 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:02:46.257157    8520 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:02:46.257907    8520 cache.go:57] Caching tarball of preloaded images
	I0604 16:02:46.258147    8520 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:02:46.258147    8520 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:02:46.258147    8520 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220604155719-5712\config.json ...
	I0604 16:02:47.295833    8520 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:02:47.295833    8520 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:02:47.295833    8520 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:02:47.295833    8520 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:02:47.295833    8520 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:02:47.295833    8520 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:02:47.295833    8520 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:02:47.295833    8520 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:02:47.295833    8520 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:02:49.509080    8520 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-3318289877: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-3318289877: read-only file system"}
	I0604 16:02:49.509109    8520 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:02:49.509109    8520 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:02:49.509109    8520 start.go:352] acquiring machines lock for multinode-20220604155719-5712: {Name:mk7df06d9ba91b0f06c5e69474f69126e3a597c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:02:49.509109    8520 start.go:356] acquired machines lock for "multinode-20220604155719-5712" in 0s
	I0604 16:02:49.509109    8520 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:02:49.509109    8520 fix.go:55] fixHost starting: 
	I0604 16:02:49.526668    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:50.558108    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:50.558108    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0314298s)
	I0604 16:02:50.558108    8520 fix.go:103] recreateIfNeeded on multinode-20220604155719-5712: state= err=unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:50.558108    8520 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:02:50.561080    8520 out.go:177] * docker "multinode-20220604155719-5712" container is missing, will recreate.
	I0604 16:02:50.564741    8520 delete.go:124] DEMOLISHING multinode-20220604155719-5712 ...
	I0604 16:02:50.577198    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:51.622239    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:51.622345    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.045031s)
	W0604 16:02:51.622417    8520 stop.go:75] unable to get state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:51.622540    8520 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:51.637114    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:52.670010    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:52.670236    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0328537s)
	I0604 16:02:52.670402    8520 delete.go:82] Unable to get host status for multinode-20220604155719-5712, assuming it has already been deleted: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:52.681340    8520 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:02:53.730296    8520 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:02:53.730296    8520 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0486839s)
	I0604 16:02:53.730374    8520 kic.go:356] could not find the container multinode-20220604155719-5712 to remove it. will try anyways
	I0604 16:02:53.745289    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:54.775632    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:54.775632    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0303316s)
	W0604 16:02:54.775632    8520 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:54.783873    8520 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0"
	W0604 16:02:55.818064    8520 cli_runner.go:211] docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:02:55.818064    8520 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": (1.0341801s)
	I0604 16:02:55.818064    8520 oci.go:625] error shutdown multinode-20220604155719-5712: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:56.828262    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:57.867122    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:57.867122    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0386297s)
	I0604 16:02:57.867122    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:57.867122    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:02:57.867122    8520 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:58.437188    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:02:59.449353    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:02:59.449431    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0109496s)
	I0604 16:02:59.449510    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:02:59.449558    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:02:59.449558    8520 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:00.542911    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:01.551688    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:01.551688    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0087666s)
	I0604 16:03:01.551688    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:01.551688    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:01.551688    8520 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:02.884328    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:03.912666    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:03.912666    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0283275s)
	I0604 16:03:03.912666    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:03.912666    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:03.912666    8520 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:05.515752    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:06.553758    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:06.553758    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0379955s)
	I0604 16:03:06.553758    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:06.553758    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:06.553758    8520 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:08.913841    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:09.920103    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:09.920340    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0062511s)
	I0604 16:03:09.920412    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:09.920454    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:09.920517    8520 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:14.436009    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:15.505985    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:15.506048    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0699182s)
	I0604 16:03:15.506166    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:15.506166    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:15.506244    8520 oci.go:88] couldn't shut down multinode-20220604155719-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	 
	I0604 16:03:15.514541    8520 cli_runner.go:164] Run: docker rm -f -v multinode-20220604155719-5712
	I0604 16:03:16.560098    8520 cli_runner.go:217] Completed: docker rm -f -v multinode-20220604155719-5712: (1.0455466s)
	I0604 16:03:16.567995    8520 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:03:17.568953    8520 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:17.568953    8520 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0009478s)
	I0604 16:03:17.578599    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:03:18.639268    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:03:18.639268    8520 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0606575s)
	I0604 16:03:18.649962    8520 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:03:18.649962    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:03:19.669972    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:19.670030    8520 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0199703s)
	I0604 16:03:19.670030    8520 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:03:19.670083    8520 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	W0604 16:03:19.670852    8520 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:03:19.670852    8520 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:03:20.674898    8520 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:03:20.680064    8520 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:03:20.680200    8520 start.go:165] libmachine.API.Create for "multinode-20220604155719-5712" (driver="docker")
	I0604 16:03:20.680200    8520 client.go:168] LocalClient.Create starting
	I0604 16:03:20.680909    8520 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:03:20.680909    8520 main.go:134] libmachine: Decoding PEM data...
	I0604 16:03:20.680909    8520 main.go:134] libmachine: Parsing certificate...
	I0604 16:03:20.680909    8520 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:03:20.680909    8520 main.go:134] libmachine: Decoding PEM data...
	I0604 16:03:20.680909    8520 main.go:134] libmachine: Parsing certificate...
	I0604 16:03:20.690232    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:03:21.724950    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:03:21.724950    8520 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0347069s)
	I0604 16:03:21.735321    8520 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:03:21.735321    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:03:22.743839    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:22.744043    8520 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.0083406s)
	I0604 16:03:22.744043    8520 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:03:22.744111    8520 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	I0604 16:03:22.751816    8520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:03:23.766667    8520 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0147727s)
	I0604 16:03:23.786179    8520 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000063c0] misses:0}
	I0604 16:03:23.786179    8520 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:03:23.786179    8520 network_create.go:115] attempt to create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:03:23.795436    8520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712
	W0604 16:03:24.844667    8520 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:24.844881    8520 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: (1.0490584s)
	E0604 16:03:24.844965    8520 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712 192.168.49.0/24: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 909af4528ff31c18c5af3bea1026fe2d94987f6c8116e32a4a4ba0c3ffb0b9ca (br-909af4528ff3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:03:24.844965    8520 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 909af4528ff31c18c5af3bea1026fe2d94987f6c8116e32a4a4ba0c3ffb0b9ca (br-909af4528ff3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 909af4528ff31c18c5af3bea1026fe2d94987f6c8116e32a4a4ba0c3ffb0b9ca (br-909af4528ff3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:03:24.859419    8520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:03:25.892340    8520 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0329107s)
	I0604 16:03:25.900042    8520 cli_runner.go:164] Run: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:03:26.890566    8520 cli_runner.go:211] docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:03:26.890566    8520 client.go:171] LocalClient.Create took 6.2103009s
	I0604 16:03:28.914314    8520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:03:28.921380    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:29.945364    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:29.945364    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0239737s)
	I0604 16:03:29.945364    8520 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:30.131975    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:31.147046    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:31.147046    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0150602s)
	W0604 16:03:31.147046    8520 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:03:31.147046    8520 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:31.158136    8520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:03:31.163802    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:32.181019    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:32.181019    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0171681s)
	I0604 16:03:32.181299    8520 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:32.397351    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:33.423900    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:33.423900    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0265383s)
	W0604 16:03:33.423900    8520 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:03:33.423900    8520 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:33.423900    8520 start.go:134] duration metric: createHost completed in 12.7487017s
	I0604 16:03:33.434700    8520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:03:33.440542    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:34.472443    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:34.472443    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0318897s)
	I0604 16:03:34.472443    8520 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:34.818546    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:35.838365    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:35.838365    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0198082s)
	W0604 16:03:35.838365    8520 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:03:35.838365    8520 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:35.847402    8520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:03:35.853395    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:36.875064    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:36.875064    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0216588s)
	I0604 16:03:36.875064    8520 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:37.104009    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:03:38.128549    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:38.128749    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0233725s)
	W0604 16:03:38.128964    8520 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:03:38.129032    8520 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:38.129032    8520 fix.go:57] fixHost completed within 48.6194088s
	I0604 16:03:38.129099    8520 start.go:81] releasing machines lock for "multinode-20220604155719-5712", held for 48.6194763s
	W0604 16:03:38.129245    8520 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	W0604 16:03:38.129245    8520 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	I0604 16:03:38.129245    8520 start.go:614] Will try again in 5 seconds ...
	I0604 16:03:43.144140    8520 start.go:352] acquiring machines lock for multinode-20220604155719-5712: {Name:mk7df06d9ba91b0f06c5e69474f69126e3a597c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:03:43.144140    8520 start.go:356] acquired machines lock for "multinode-20220604155719-5712" in 0s
	I0604 16:03:43.144746    8520 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:03:43.144746    8520 fix.go:55] fixHost starting: 
	I0604 16:03:43.161397    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:44.178201    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:44.178230    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0164492s)
	I0604 16:03:44.178307    8520 fix.go:103] recreateIfNeeded on multinode-20220604155719-5712: state= err=unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:44.178355    8520 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:03:44.182973    8520 out.go:177] * docker "multinode-20220604155719-5712" container is missing, will recreate.
	I0604 16:03:44.185208    8520 delete.go:124] DEMOLISHING multinode-20220604155719-5712 ...
	I0604 16:03:44.197677    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:45.229563    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:45.229563    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0316704s)
	W0604 16:03:45.229563    8520 stop.go:75] unable to get state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:45.229563    8520 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:45.243505    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:46.261841    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:46.261991    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0183252s)
	I0604 16:03:46.262072    8520 delete.go:82] Unable to get host status for multinode-20220604155719-5712, assuming it has already been deleted: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:46.269749    8520 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:03:47.311172    8520 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:03:47.311199    8520 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0413647s)
	I0604 16:03:47.311296    8520 kic.go:356] could not find the container multinode-20220604155719-5712 to remove it. will try anyways
	I0604 16:03:47.318908    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:48.378094    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:48.378094    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0591746s)
	W0604 16:03:48.378094    8520 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:48.387789    8520 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0"
	W0604 16:03:49.408373    8520 cli_runner.go:211] docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:03:49.408522    8520 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": (1.0205732s)
	I0604 16:03:49.408522    8520 oci.go:625] error shutdown multinode-20220604155719-5712: docker exec --privileged -t multinode-20220604155719-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:50.431098    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:51.469696    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:51.469821    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0375388s)
	I0604 16:03:51.469880    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:51.469880    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:51.469880    8520 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:51.971590    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:52.984711    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:52.984711    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0121834s)
	I0604 16:03:52.984711    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:52.984711    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:52.984711    8520 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:53.594312    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:54.622016    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:54.622016    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0276935s)
	I0604 16:03:54.622016    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:54.622016    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:54.622016    8520 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:55.529344    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:56.589689    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:56.589743    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0602246s)
	I0604 16:03:56.589832    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:56.590007    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:56.590007    8520 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:58.594356    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:03:59.618530    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:03:59.618530    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0241635s)
	I0604 16:03:59.618530    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:03:59.623503    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:03:59.623577    8520 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:01.453276    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:04:02.474406    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:04:02.474406    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0210537s)
	I0604 16:04:02.474406    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:02.474406    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:04:02.474406    8520 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:05.165176    8520 cli_runner.go:164] Run: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}
	W0604 16:04:06.190360    8520 cli_runner.go:211] docker container inspect multinode-20220604155719-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:04:06.190360    8520 cli_runner.go:217] Completed: docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: (1.0249718s)
	I0604 16:04:06.190444    8520 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:06.190594    8520 oci.go:639] temporary error: container multinode-20220604155719-5712 status is  but expect it to be exited
	I0604 16:04:06.190668    8520 oci.go:88] couldn't shut down multinode-20220604155719-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	 
	I0604 16:04:06.197498    8520 cli_runner.go:164] Run: docker rm -f -v multinode-20220604155719-5712
	I0604 16:04:07.272553    8520 cli_runner.go:217] Completed: docker rm -f -v multinode-20220604155719-5712: (1.0750438s)
	I0604 16:04:07.281344    8520 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220604155719-5712
	W0604 16:04:08.323374    8520 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:08.323604    8520 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220604155719-5712: (1.0420192s)
	I0604 16:04:08.331510    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:04:09.358517    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:04:09.358517    8520 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.026996s)
	I0604 16:04:09.367509    8520 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:04:09.367509    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:04:10.420166    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:10.420166    8520 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.052646s)
	I0604 16:04:10.420166    8520 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:04:10.420166    8520 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	W0604 16:04:10.421187    8520 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:04:10.421187    8520 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:04:11.424337    8520 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:04:11.434732    8520 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:04:11.435341    8520 start.go:165] libmachine.API.Create for "multinode-20220604155719-5712" (driver="docker")
	I0604 16:04:11.435341    8520 client.go:168] LocalClient.Create starting
	I0604 16:04:11.435570    8520 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:04:11.436353    8520 main.go:134] libmachine: Decoding PEM data...
	I0604 16:04:11.436353    8520 main.go:134] libmachine: Parsing certificate...
	I0604 16:04:11.436464    8520 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:04:11.436464    8520 main.go:134] libmachine: Decoding PEM data...
	I0604 16:04:11.436464    8520 main.go:134] libmachine: Parsing certificate...
	I0604 16:04:11.448019    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:04:12.440457    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:04:12.448359    8520 network_create.go:272] running [docker network inspect multinode-20220604155719-5712] to gather additional debugging logs...
	I0604 16:04:12.448359    8520 cli_runner.go:164] Run: docker network inspect multinode-20220604155719-5712
	W0604 16:04:13.501369    8520 cli_runner.go:211] docker network inspect multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:13.501369    8520 cli_runner.go:217] Completed: docker network inspect multinode-20220604155719-5712: (1.052831s)
	I0604 16:04:13.501444    8520 network_create.go:275] error running [docker network inspect multinode-20220604155719-5712]: docker network inspect multinode-20220604155719-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220604155719-5712
	I0604 16:04:13.501444    8520 network_create.go:277] output of [docker network inspect multinode-20220604155719-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220604155719-5712
	
	** /stderr **
	I0604 16:04:13.504412    8520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:04:14.518734    8520 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0143111s)
	I0604 16:04:14.536761    8520 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063c0] amended:false}} dirty:map[] misses:0}
	I0604 16:04:14.536761    8520 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:04:14.551461    8520 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000063c0] amended:true}} dirty:map[192.168.49.0:0xc0000063c0 192.168.58.0:0xc000a7f4e8] misses:0}
	I0604 16:04:14.551461    8520 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:04:14.551461    8520 network_create.go:115] attempt to create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:04:14.569172    8520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712
	W0604 16:04:15.582955    8520 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:15.583017    8520 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: (1.0136877s)
	E0604 16:04:15.583044    8520 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712 192.168.58.0/24: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8160c77b8a4445ae091881e7d6d3a9c1b257a461bad557505c479e9821f0daa7 (br-8160c77b8a44): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:04:15.583044    8520 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8160c77b8a4445ae091881e7d6d3a9c1b257a461bad557505c479e9821f0daa7 (br-8160c77b8a44): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8160c77b8a4445ae091881e7d6d3a9c1b257a461bad557505c479e9821f0daa7 (br-8160c77b8a44): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
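The stderr above shows the Docker daemon rejecting 192.168.58.0/24 even though minikube's in-process reservation map considered it free: the map only tracks subnets minikube itself reserved (192.168.49.0/24 here), while the daemon compares the candidate against every existing bridge network, including br-1140b1ac4d94. The overlap check the daemon effectively performs can be sketched with Python's stdlib `ipaddress` module (a minimal illustration, not minikube's or Docker's actual code; the log never prints br-1140b1ac4d94's CIDR, so the `existing` subnets below are assumptions for illustration):

```python
# Sketch of why "networks have overlapping IPv4" fires: the candidate subnet
# is checked against every bridge the daemon already knows about.
import ipaddress

candidate = ipaddress.ip_network("192.168.58.0/24")

# Hypothetical subnets of existing bridges; the real subnet behind
# br-1140b1ac4d94 is not shown in the log, so these values are assumed.
existing = [
    ipaddress.ip_network("192.168.49.0/24"),  # tracked by minikube's map
    ipaddress.ip_network("192.168.58.0/24"),  # untracked bridge -> conflict
]

conflicts = [str(n) for n in existing if n.overlaps(candidate)]
print(conflicts)  # -> ['192.168.58.0/24']
```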
	I0604 16:04:15.598360    8520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:04:16.624353    8520 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0259827s)
	I0604 16:04:16.631978    8520 cli_runner.go:164] Run: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:04:17.612019    8520 cli_runner.go:211] docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:04:17.612019    8520 client.go:171] LocalClient.Create took 6.1766124s
	I0604 16:04:19.623240    8520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:04:19.631354    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:20.624960    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:20.625188    8520 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:20.915374    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:21.937757    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:21.937757    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0221087s)
	W0604 16:04:21.937962    8520 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:04:21.938021    8520 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:21.947530    8520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:04:21.953605    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:22.955623    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:22.955623    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0017823s)
	I0604 16:04:22.955846    8520 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:23.175208    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:24.225669    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:24.225669    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0504506s)
	W0604 16:04:24.225669    8520 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:04:24.225669    8520 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:24.225669    8520 start.go:134] duration metric: createHost completed in 12.8009221s
	I0604 16:04:24.236406    8520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:04:24.242019    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:25.252554    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:25.252639    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0103806s)
	I0604 16:04:25.252890    8520 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:25.588753    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:26.596618    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:26.596879    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0078544s)
	W0604 16:04:26.596906    8520 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:04:26.596906    8520 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:26.607652    8520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:04:26.613499    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:27.627926    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:27.627926    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.014416s)
	I0604 16:04:27.627926    8520 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:27.996426    8520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712
	W0604 16:04:29.029699    8520 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712 returned with exit code 1
	I0604 16:04:29.029699    8520 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: (1.0332622s)
	W0604 16:04:29.029699    8520 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	W0604 16:04:29.029699    8520 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220604155719-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220604155719-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	I0604 16:04:29.029699    8520 fix.go:57] fixHost completed within 45.884466s
	I0604 16:04:29.029699    8520 start.go:81] releasing machines lock for "multinode-20220604155719-5712", held for 45.885072s
	W0604 16:04:29.029699    8520 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	I0604 16:04:29.036387    8520 out.go:177] 
	W0604 16:04:29.038346    8520 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712 container: docker volume create multinode-20220604155719-5712 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system
	
	W0604 16:04:29.038346    8520 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:04:29.038880    8520 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:04:29.044327    8520 out.go:177] 

** /stderr **
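Two independent errors appear in the run above: the recoverable subnet conflict, and the fatal read-only `/var/lib/docker` hit while creating the node volume, which drives the `PR_DOCKER_READONLY_VOL` exit. A hypothetical triage helper that classifies a saved run by these stderr signatures could look like this (sketch only; the sample message mirrors the daemon error captured above, and the labels are this report's, not a minikube tool):

```shell
# Hypothetical log-triage sketch: map known stderr signatures to a verdict.
log='Error response from daemon: create multinode-20220604155719-5712: error while creating volume root path: mkdir /var/lib/docker/volumes/multinode-20220604155719-5712: read-only file system'

case "$log" in
  *"read-only file system"*) verdict="PR_DOCKER_READONLY_VOL: restart Docker Desktop" ;;
  *"overlapping IPv4"*)      verdict="subnet conflict: prune stale docker networks" ;;
  *)                         verdict="unclassified" ;;
esac
echo "$verdict"
```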
multinode_test.go:354: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712 --wait=true -v=8 --alsologtostderr --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0997537s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7732708s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:04:33.128129    9164 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (114.92s)

TestMultiNode/serial/ValidateNameConflict (162.7s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220604155719-5712
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712-m01 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712-m01 --driver=docker: exit status 60 (1m14.0232175s)

-- stdout --
	* [multinode-20220604155719-5712-m01] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220604155719-5712-m01 in cluster multinode-20220604155719-5712-m01
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220604155719-5712-m01" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:04:47.908428    5664 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712-m01 192.168.49.0/24: create docker network multinode-20220604155719-5712-m01 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 04d6c7d0eff2ccb0e6f869f45fd1325dd06ac847208f93f1db936e0851202d40 (br-04d6c7d0eff2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712-m01 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 04d6c7d0eff2ccb0e6f869f45fd1325dd06ac847208f93f1db936e0851202d40 (br-04d6c7d0eff2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712-m01 container: docker volume create multinode-20220604155719-5712-m01 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712-m01': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712-m01: read-only file system
	
	E0604 16:05:34.274676    5664 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712-m01 192.168.58.0/24: create docker network multinode-20220604155719-5712-m01 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 15ad3145200a14a53dd53c8bd0da4b879f02d2483598a0192334f7ee6a1236ba (br-15ad3145200a): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712-m01 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 15ad3145200a14a53dd53c8bd0da4b879f02d2483598a0192334f7ee6a1236ba (br-15ad3145200a): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712-m01" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712-m01 container: docker volume create multinode-20220604155719-5712-m01 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712-m01': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712-m01: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712-m01 container: docker volume create multinode-20220604155719-5712-m01 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712-m01': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712-m01: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712-m02 --driver=docker
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712-m02 --driver=docker: exit status 60 (1m13.3139444s)

-- stdout --
	* [multinode-20220604155719-5712-m02] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220604155719-5712-m02 in cluster multinode-20220604155719-5712-m02
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220604155719-5712-m02" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:06:01.669332     816 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712-m02 192.168.49.0/24: create docker network multinode-20220604155719-5712-m02 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d1569088cd7e2b26085043187c5a47fadcf182b22c3f50f33773653006da774c (br-d1569088cd7e): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712-m02 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d1569088cd7e2b26085043187c5a47fadcf182b22c3f50f33773653006da774c (br-d1569088cd7e): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712-m02 container: docker volume create multinode-20220604155719-5712-m02 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712-m02': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712-m02: read-only file system
	
	E0604 16:06:47.441141     816 network_create.go:104] error while trying to create docker network multinode-20220604155719-5712-m02 192.168.58.0/24: create docker network multinode-20220604155719-5712-m02 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 668c54d43eca4ea01942829e6f60d8c3dfba4941f38e40fe8b200a8594314a06 (br-668c54d43eca): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220604155719-5712-m02 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220604155719-5712-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 668c54d43eca4ea01942829e6f60d8c3dfba4941f38e40fe8b200a8594314a06 (br-668c54d43eca): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220604155719-5712-m02" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712-m02 container: docker volume create multinode-20220604155719-5712-m02 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712-m02': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712-m02: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220604155719-5712-m02 container: docker volume create multinode-20220604155719-5712-m02 --label name.minikube.sigs.k8s.io=multinode-20220604155719-5712-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220604155719-5712-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220604155719-5712-m02': mkdir /var/lib/docker/volumes/multinode-20220604155719-5712-m02: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
multinode_test.go:460: failed to start profile. args "out/minikube-windows-amd64.exe start -p multinode-20220604155719-5712-m02 --driver=docker" : exit status 60
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220604155719-5712
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220604155719-5712: exit status 80 (3.0656394s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_24.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220604155719-5712-m02
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220604155719-5712-m02: (8.0591366s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220604155719-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220604155719-5712: exit status 1 (1.0897324s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220604155719-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220604155719-5712 -n multinode-20220604155719-5712: exit status 7 (2.7895693s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:07:15.842058    8448 status.go:247] status error: host: state: unknown state "multinode-20220604155719-5712": docker container inspect multinode-20220604155719-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220604155719-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220604155719-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (162.70s)

TestPreload (85.44s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220604160727-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-20220604160727-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: exit status 60 (1m13.7073501s)

-- stdout --
	* [test-preload-20220604160727-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node test-preload-20220604160727-5712 in cluster test-preload-20220604160727-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "test-preload-20220604160727-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:07:27.889048    8132 out.go:296] Setting OutFile to fd 848 ...
	I0604 16:07:27.948523    8132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:07:27.948523    8132 out.go:309] Setting ErrFile to fd 972...
	I0604 16:07:27.948523    8132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:07:27.958630    8132 out.go:303] Setting JSON to false
	I0604 16:07:27.960701    8132 start.go:115] hostinfo: {"hostname":"minikube2","uptime":9920,"bootTime":1654348927,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:07:27.960701    8132 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:07:27.971858    8132 out.go:177] * [test-preload-20220604160727-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:07:27.975438    8132 notify.go:193] Checking for updates...
	I0604 16:07:27.978093    8132 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:07:27.981343    8132 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:07:27.983497    8132 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:07:27.985963    8132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:07:27.990055    8132 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:07:27.990176    8132 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:07:30.456343    8132 docker.go:137] docker version: linux-20.10.16
	I0604 16:07:30.463399    8132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:07:32.369449    8132 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9060291s)
	I0604 16:07:32.370537    8132 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:07:31.4059376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:07:32.374416    8132 out.go:177] * Using the docker driver based on user configuration
	I0604 16:07:32.376572    8132 start.go:284] selected driver: docker
	I0604 16:07:32.376572    8132 start.go:806] validating driver "docker" against <nil>
	I0604 16:07:32.376572    8132 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:07:32.505660    8132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:07:34.446971    8132 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9412899s)
	I0604 16:07:34.447380    8132 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:07:33.4912288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:07:34.447674    8132 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:07:34.448448    8132 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:07:34.452526    8132 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:07:34.454964    8132 cni.go:95] Creating CNI manager for ""
	I0604 16:07:34.454964    8132 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:07:34.454964    8132 start_flags.go:306] config:
	{Name:test-preload-20220604160727-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220604160727-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:07:34.460640    8132 out.go:177] * Starting control plane node test-preload-20220604160727-5712 in cluster test-preload-20220604160727-5712
	I0604 16:07:34.464009    8132 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:07:34.465638    8132 out.go:177] * Pulling base image ...
	I0604 16:07:34.468130    8132 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0604 16:07:34.468130    8132 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:07:34.469904    8132 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220604160727-5712\config.json ...
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0604 16:07:34.469904    8132 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220604160727-5712\config.json: {Name:mka7a0955f82fbc83e0c6e554e1ba0bcefcd3ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0604 16:07:34.469904    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0604 16:07:34.627223    8132 cache.go:107] acquiring lock: {Name:mkef9a3d9e3cbb1fe114c12bec441ddb11fca0c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.627755    8132 cache.go:107] acquiring lock: {Name:mkb269f15b2e3b2569308dbf84de26df267b2fcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.628105    8132 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0604 16:07:34.628303    8132 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0604 16:07:34.630671    8132 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.631058    8132 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0604 16:07:34.631296    8132 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 161.3903ms
	I0604 16:07:34.631379    8132 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0604 16:07:34.640050    8132 cache.go:107] acquiring lock: {Name:mk965b06109155c0e187b8b69e2b0548d9bccb3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.640050    8132 cache.go:107] acquiring lock: {Name:mk7af4d324ae5378e4084d0d909beff30d29e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.640050    8132 cache.go:107] acquiring lock: {Name:mkef49659bc6e08b20a8521eb6ce4fb712ad39c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.640050    8132 cache.go:107] acquiring lock: {Name:mkfe379c4c474168d5a5fd2dde0e9bf1347e993b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.640050    8132 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0604 16:07:34.640050    8132 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0604 16:07:34.640814    8132 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0604 16:07:34.640856    8132 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0604 16:07:34.648087    8132 cache.go:107] acquiring lock: {Name:mk2bed4c2f349144087ca9b4676d08589a5f3b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:34.648684    8132 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0604 16:07:34.670906    8132 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0604 16:07:34.677941    8132 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0604 16:07:34.681861    8132 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0604 16:07:34.690342    8132 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0604 16:07:34.702088    8132 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0604 16:07:34.707845    8132 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0604 16:07:34.718444    8132 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	W0604 16:07:34.941951    8132 image.go:190] authn lookup for k8s.gcr.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0604 16:07:35.182391    8132 image.go:190] authn lookup for k8s.gcr.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0604 16:07:35.425211    8132 image.go:190] authn lookup for k8s.gcr.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0604 16:07:35.622041    8132 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:07:35.622041    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:07:35.622041    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:07:35.622560    8132 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:07:35.622698    8132 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:07:35.622698    8132 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:07:35.622698    8132 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:07:35.622698    8132 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:07:35.622698    8132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:07:35.626817    8132 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	W0604 16:07:35.688727    8132 image.go:190] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0604 16:07:35.765029    8132 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0604 16:07:35.765029    8132 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	W0604 16:07:35.948414    8132 image.go:190] authn lookup for k8s.gcr.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0604 16:07:36.113334    8132 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	W0604 16:07:36.191629    8132 image.go:190] authn lookup for k8s.gcr.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0604 16:07:36.245027    8132 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	W0604 16:07:36.459200    8132 image.go:190] authn lookup for k8s.gcr.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0604 16:07:36.602466    8132 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0604 16:07:36.780597    8132 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0604 16:07:36.850453    8132 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 exists
	I0604 16:07:36.850453    8132 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.1" took 2.3805236s
	I0604 16:07:36.853387    8132 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 succeeded
	I0604 16:07:36.889878    8132 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 exists
	I0604 16:07:36.890586    8132 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.17.0" took 2.4206564s
	I0604 16:07:36.890586    8132 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 succeeded
	I0604 16:07:37.159457    8132 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 exists
	I0604 16:07:37.163879    8132 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns_1.6.5" took 2.6939462s
	I0604 16:07:37.164033    8132 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 succeeded
	I0604 16:07:37.344918    8132 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 exists
	I0604 16:07:37.355006    8132 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.17.0" took 2.8850235s
	I0604 16:07:37.355006    8132 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 succeeded
	I0604 16:07:37.542134    8132 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 exists
	I0604 16:07:37.550785    8132 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.17.0" took 3.0802338s
	I0604 16:07:37.550889    8132 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 succeeded
	I0604 16:07:37.829591    8132 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 exists
	I0604 16:07:37.829591    8132 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.17.0" took 3.3596506s
	I0604 16:07:37.831150    8132 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 succeeded
	I0604 16:07:37.896212    8132 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I0604 16:07:37.905537    8132 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 3.4262709s
	I0604 16:07:37.905537    8132 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0604 16:07:37.905537    8132 cache.go:87] Successfully saved all images to host disk.
	I0604 16:07:37.970306    8132 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:07:38.025696    8132 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:07:38.025696    8132 start.go:352] acquiring machines lock for test-preload-20220604160727-5712: {Name:mk7ff8fa92e4969feb455fb08041c97815d63370 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:38.025696    8132 start.go:356] acquired machines lock for "test-preload-20220604160727-5712" in 0s
	I0604 16:07:38.026378    8132 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220604160727-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220604160727-5712 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:07:38.026485    8132 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:07:38.029873    8132 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:07:38.030282    8132 start.go:165] libmachine.API.Create for "test-preload-20220604160727-5712" (driver="docker")
	I0604 16:07:38.030388    8132 client.go:168] LocalClient.Create starting
	I0604 16:07:38.031019    8132 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:07:38.031387    8132 main.go:134] libmachine: Decoding PEM data...
	I0604 16:07:38.031387    8132 main.go:134] libmachine: Parsing certificate...
	I0604 16:07:38.031683    8132 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:07:38.031867    8132 main.go:134] libmachine: Decoding PEM data...
	I0604 16:07:38.031957    8132 main.go:134] libmachine: Parsing certificate...
	I0604 16:07:38.039734    8132 cli_runner.go:164] Run: docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:07:39.081040    8132 cli_runner.go:211] docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:07:39.081121    8132 cli_runner.go:217] Completed: docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0411356s)
	I0604 16:07:39.087913    8132 network_create.go:272] running [docker network inspect test-preload-20220604160727-5712] to gather additional debugging logs...
	I0604 16:07:39.087913    8132 cli_runner.go:164] Run: docker network inspect test-preload-20220604160727-5712
	W0604 16:07:40.094923    8132 cli_runner.go:211] docker network inspect test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:07:40.094988    8132 cli_runner.go:217] Completed: docker network inspect test-preload-20220604160727-5712: (1.006785s)
	I0604 16:07:40.094988    8132 network_create.go:275] error running [docker network inspect test-preload-20220604160727-5712]: docker network inspect test-preload-20220604160727-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220604160727-5712
	I0604 16:07:40.094988    8132 network_create.go:277] output of [docker network inspect test-preload-20220604160727-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220604160727-5712
	
	** /stderr **
	I0604 16:07:40.103105    8132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:07:41.110136    8132 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0070196s)
	I0604 16:07:41.125635    8132 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc001ac0048] misses:0}
	I0604 16:07:41.131222    8132 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:07:41.131222    8132 network_create.go:115] attempt to create docker network test-preload-20220604160727-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:07:41.131520    8132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712
	W0604 16:07:42.154272    8132 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:07:42.154465    8132 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: (1.0226945s)
	E0604 16:07:42.154465    8132 network_create.go:104] error while trying to create docker network test-preload-20220604160727-5712 192.168.49.0/24: create docker network test-preload-20220604160727-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3bcddca11b567f8ce505b63b9eee59646788f6aea06a6ab8ee757635a44c6d9d (br-3bcddca11b56): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:07:42.154465    8132 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220604160727-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3bcddca11b567f8ce505b63b9eee59646788f6aea06a6ab8ee757635a44c6d9d (br-3bcddca11b56): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220604160727-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3bcddca11b567f8ce505b63b9eee59646788f6aea06a6ab8ee757635a44c6d9d (br-3bcddca11b56): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:07:42.169029    8132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:07:43.204672    8132 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0355311s)
	I0604 16:07:43.213155    8132 cli_runner.go:164] Run: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:07:44.218667    8132 cli_runner.go:211] docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:07:44.218769    8132 cli_runner.go:217] Completed: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0055019s)
	I0604 16:07:44.218769    8132 client.go:171] LocalClient.Create took 6.1883148s
	I0604 16:07:46.236973    8132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:07:46.244909    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:07:47.279540    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:07:47.279578    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0343966s)
	I0604 16:07:47.279578    8132 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:07:47.570226    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:07:48.590208    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:07:48.590322    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0197629s)
	W0604 16:07:48.590322    8132 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	
	W0604 16:07:48.590322    8132 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:07:48.601599    8132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:07:48.606746    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:07:49.608492    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:07:49.608492    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0015285s)
	I0604 16:07:49.608693    8132 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:07:49.914925    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:07:50.895256    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	W0604 16:07:50.895256    8132 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	
	W0604 16:07:50.895256    8132 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:07:50.895256    8132 start.go:134] duration metric: createHost completed in 12.868633s
	I0604 16:07:50.895256    8132 start.go:81] releasing machines lock for "test-preload-20220604160727-5712", held for 12.8694229s
	W0604 16:07:50.895256    8132 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for test-preload-20220604160727-5712 container: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220604160727-5712: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220604160727-5712': mkdir /var/lib/docker/volumes/test-preload-20220604160727-5712: read-only file system
	I0604 16:07:50.908726    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:07:51.935495    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:07:51.935495    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0267581s)
	I0604 16:07:51.935495    8132 delete.go:82] Unable to get host status for test-preload-20220604160727-5712, assuming it has already been deleted: state: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	W0604 16:07:51.935495    8132 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for test-preload-20220604160727-5712 container: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220604160727-5712: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220604160727-5712': mkdir /var/lib/docker/volumes/test-preload-20220604160727-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for test-preload-20220604160727-5712 container: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220604160727-5712: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220604160727-5712': mkdir /var/lib/docker/volumes/test-preload-20220604160727-5712: read-only file system
	
	I0604 16:07:51.935495    8132 start.go:614] Will try again in 5 seconds ...
	I0604 16:07:56.936079    8132 start.go:352] acquiring machines lock for test-preload-20220604160727-5712: {Name:mk7ff8fa92e4969feb455fb08041c97815d63370 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:07:56.936476    8132 start.go:356] acquired machines lock for "test-preload-20220604160727-5712" in 396.4µs
	I0604 16:07:56.936699    8132 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:07:56.936804    8132 fix.go:55] fixHost starting: 
	I0604 16:07:56.948922    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:07:57.933813    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:07:57.934032    8132 fix.go:103] recreateIfNeeded on test-preload-20220604160727-5712: state= err=unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:07:57.934032    8132 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:07:57.947500    8132 out.go:177] * docker "test-preload-20220604160727-5712" container is missing, will recreate.
	I0604 16:07:57.952198    8132 delete.go:124] DEMOLISHING test-preload-20220604160727-5712 ...
	I0604 16:07:57.967836    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:07:58.971567    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:07:58.971567    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0036111s)
	W0604 16:07:58.971676    8132 stop.go:75] unable to get state: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:07:58.971676    8132 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:07:58.987421    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:00.011486    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:00.011567    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0238646s)
	I0604 16:08:00.011567    8132 delete.go:82] Unable to get host status for test-preload-20220604160727-5712, assuming it has already been deleted: state: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:00.020213    8132 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220604160727-5712
	W0604 16:08:01.019124    8132 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:01.019124    8132 kic.go:356] could not find the container test-preload-20220604160727-5712 to remove it. will try anyways
	I0604 16:08:01.026567    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:02.012959    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	W0604 16:08:02.012959    8132 oci.go:84] error getting container status, will try to delete anyways: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:02.020551    8132 cli_runner.go:164] Run: docker exec --privileged -t test-preload-20220604160727-5712 /bin/bash -c "sudo init 0"
	W0604 16:08:03.060544    8132 cli_runner.go:211] docker exec --privileged -t test-preload-20220604160727-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:08:03.060544    8132 cli_runner.go:217] Completed: docker exec --privileged -t test-preload-20220604160727-5712 /bin/bash -c "sudo init 0": (1.0397758s)
	I0604 16:08:03.060633    8132 oci.go:625] error shutdown test-preload-20220604160727-5712: docker exec --privileged -t test-preload-20220604160727-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:04.079302    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:05.106413    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:05.106473    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0269686s)
	I0604 16:08:05.106534    8132 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:05.106534    8132 oci.go:639] temporary error: container test-preload-20220604160727-5712 status is  but expect it to be exited
	I0604 16:08:05.106534    8132 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:05.586240    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:06.621586    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:06.621586    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0350508s)
	I0604 16:08:06.621586    8132 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:06.621586    8132 oci.go:639] temporary error: container test-preload-20220604160727-5712 status is  but expect it to be exited
	I0604 16:08:06.621586    8132 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:07.518021    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:08.538048    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:08.538132    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0195217s)
	I0604 16:08:08.538199    8132 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:08.538264    8132 oci.go:639] temporary error: container test-preload-20220604160727-5712 status is  but expect it to be exited
	I0604 16:08:08.538264    8132 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:09.197606    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:10.215044    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:10.215262    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.017427s)
	I0604 16:08:10.215361    8132 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:10.215361    8132 oci.go:639] temporary error: container test-preload-20220604160727-5712 status is  but expect it to be exited
	I0604 16:08:10.215428    8132 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:11.336920    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:12.332322    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:12.332534    8132 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:12.332645    8132 oci.go:639] temporary error: container test-preload-20220604160727-5712 status is  but expect it to be exited
	I0604 16:08:12.332645    8132 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:13.862212    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:14.901333    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:14.901507    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0389397s)
	I0604 16:08:14.901630    8132 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:14.901630    8132 oci.go:639] temporary error: container test-preload-20220604160727-5712 status is  but expect it to be exited
	I0604 16:08:14.901630    8132 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:17.952058    8132 cli_runner.go:164] Run: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}
	W0604 16:08:18.962800    8132 cli_runner.go:211] docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:08:18.962833    8132 cli_runner.go:217] Completed: docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: (1.0105244s)
	I0604 16:08:18.962984    8132 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:18.962984    8132 oci.go:639] temporary error: container test-preload-20220604160727-5712 status is  but expect it to be exited
	I0604 16:08:18.962984    8132 oci.go:88] couldn't shut down test-preload-20220604160727-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	 
	I0604 16:08:18.970048    8132 cli_runner.go:164] Run: docker rm -f -v test-preload-20220604160727-5712
	I0604 16:08:19.975817    8132 cli_runner.go:217] Completed: docker rm -f -v test-preload-20220604160727-5712: (1.0055384s)
	I0604 16:08:19.983402    8132 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220604160727-5712
	W0604 16:08:20.978535    8132 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:20.986893    8132 cli_runner.go:164] Run: docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:08:22.001539    8132 cli_runner.go:211] docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:08:22.001539    8132 cli_runner.go:217] Completed: docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0144056s)
	I0604 16:08:22.010848    8132 network_create.go:272] running [docker network inspect test-preload-20220604160727-5712] to gather additional debugging logs...
	I0604 16:08:22.010936    8132 cli_runner.go:164] Run: docker network inspect test-preload-20220604160727-5712
	W0604 16:08:23.003639    8132 cli_runner.go:211] docker network inspect test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:23.003639    8132 network_create.go:275] error running [docker network inspect test-preload-20220604160727-5712]: docker network inspect test-preload-20220604160727-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220604160727-5712
	I0604 16:08:23.003858    8132 network_create.go:277] output of [docker network inspect test-preload-20220604160727-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220604160727-5712
	
	** /stderr **
	W0604 16:08:23.004708    8132 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:08:23.004708    8132 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:08:24.023563    8132 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:08:24.028048    8132 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:08:24.028048    8132 start.go:165] libmachine.API.Create for "test-preload-20220604160727-5712" (driver="docker")
	I0604 16:08:24.028048    8132 client.go:168] LocalClient.Create starting
	I0604 16:08:24.028750    8132 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:08:24.028750    8132 main.go:134] libmachine: Decoding PEM data...
	I0604 16:08:24.028750    8132 main.go:134] libmachine: Parsing certificate...
	I0604 16:08:24.029475    8132 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:08:24.029475    8132 main.go:134] libmachine: Decoding PEM data...
	I0604 16:08:24.029475    8132 main.go:134] libmachine: Parsing certificate...
	I0604 16:08:24.039720    8132 cli_runner.go:164] Run: docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:08:25.036106    8132 cli_runner.go:211] docker network inspect test-preload-20220604160727-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:08:25.043372    8132 network_create.go:272] running [docker network inspect test-preload-20220604160727-5712] to gather additional debugging logs...
	I0604 16:08:25.043372    8132 cli_runner.go:164] Run: docker network inspect test-preload-20220604160727-5712
	W0604 16:08:26.048632    8132 cli_runner.go:211] docker network inspect test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:26.048632    8132 cli_runner.go:217] Completed: docker network inspect test-preload-20220604160727-5712: (1.0051037s)
	I0604 16:08:26.048632    8132 network_create.go:275] error running [docker network inspect test-preload-20220604160727-5712]: docker network inspect test-preload-20220604160727-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220604160727-5712
	I0604 16:08:26.048730    8132 network_create.go:277] output of [docker network inspect test-preload-20220604160727-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220604160727-5712
	
	** /stderr **
	I0604 16:08:26.056544    8132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:08:27.064714    8132 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0079708s)
	I0604 16:08:27.083486    8132 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001ac0048] amended:false}} dirty:map[] misses:0}
	I0604 16:08:27.083486    8132 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:08:27.095808    8132 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001ac0048] amended:true}} dirty:map[192.168.49.0:0xc001ac0048 192.168.58.0:0xc000642a28] misses:0}
	I0604 16:08:27.095808    8132 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:08:27.095808    8132 network_create.go:115] attempt to create docker network test-preload-20220604160727-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:08:27.106191    8132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712
	W0604 16:08:28.117246    8132 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:28.117461    8132 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: (1.0109193s)
	E0604 16:08:28.117489    8132 network_create.go:104] error while trying to create docker network test-preload-20220604160727-5712 192.168.58.0/24: create docker network test-preload-20220604160727-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 055c5aecb842640ee854896aa05a1474a7016ba449849d8a7207b5f534af36c5 (br-055c5aecb842): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:08:28.117832    8132 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220604160727-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 055c5aecb842640ee854896aa05a1474a7016ba449849d8a7207b5f534af36c5 (br-055c5aecb842): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220604160727-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220604160727-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 055c5aecb842640ee854896aa05a1474a7016ba449849d8a7207b5f534af36c5 (br-055c5aecb842): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:08:28.129712    8132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:08:29.168473    8132 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0384971s)
	I0604 16:08:29.176131    8132 cli_runner.go:164] Run: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:08:30.177831    8132 cli_runner.go:211] docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:08:30.177922    8132 cli_runner.go:217] Completed: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0014751s)
	I0604 16:08:30.177966    8132 client.go:171] LocalClient.Create took 6.1498522s
	I0604 16:08:32.193491    8132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:08:32.201828    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:33.189591    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:33.189591    8132 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:33.527906    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:34.516163    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	W0604 16:08:34.516163    8132 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	
	W0604 16:08:34.516163    8132 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:34.527056    8132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:08:34.532387    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:35.519586    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:35.519735    8132 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:35.760608    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:36.761509    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:36.761509    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0008898s)
	W0604 16:08:36.762084    8132 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	
	W0604 16:08:36.762140    8132 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:36.762140    8132 start.go:134] duration metric: createHost completed in 12.7380918s
	I0604 16:08:36.772031    8132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:08:36.775082    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:37.790632    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:37.790706    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0152688s)
	I0604 16:08:37.790806    8132 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:38.056147    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:39.069414    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:39.069447    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0130188s)
	W0604 16:08:39.069447    8132 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	
	W0604 16:08:39.069447    8132 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:39.079198    8132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:08:39.085474    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:40.095248    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:40.095248    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0097637s)
	I0604 16:08:40.095604    8132 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:40.307299    8132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712
	W0604 16:08:41.322158    8132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712 returned with exit code 1
	I0604 16:08:41.322158    8132 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: (1.0145606s)
	W0604 16:08:41.322298    8132 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	
	W0604 16:08:41.322395    8132 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220604160727-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220604160727-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712
	I0604 16:08:41.322515    8132 fix.go:57] fixHost completed within 44.3853398s
	I0604 16:08:41.322515    8132 start.go:81] releasing machines lock for "test-preload-20220604160727-5712", held for 44.3854766s
	W0604 16:08:41.323032    8132 out.go:239] * Failed to start docker container. Running "minikube delete -p test-preload-20220604160727-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220604160727-5712 container: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220604160727-5712: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220604160727-5712': mkdir /var/lib/docker/volumes/test-preload-20220604160727-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p test-preload-20220604160727-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220604160727-5712 container: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220604160727-5712: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220604160727-5712': mkdir /var/lib/docker/volumes/test-preload-20220604160727-5712: read-only file system
	
	I0604 16:08:41.329381    8132 out.go:177] 
	W0604 16:08:41.331610    8132 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220604160727-5712 container: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220604160727-5712: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220604160727-5712': mkdir /var/lib/docker/volumes/test-preload-20220604160727-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220604160727-5712 container: docker volume create test-preload-20220604160727-5712 --label name.minikube.sigs.k8s.io=test-preload-20220604160727-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220604160727-5712: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220604160727-5712': mkdir /var/lib/docker/volumes/test-preload-20220604160727-5712: read-only file system
	
	W0604 16:08:41.331610    8132 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:08:41.331610    8132 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:08:41.337460    8132 out.go:177] 

** /stderr **
preload_test.go:50: out/minikube-windows-amd64.exe start -p test-preload-20220604160727-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0 failed: exit status 60
panic.go:482: *** TestPreload FAILED at 2022-06-04 16:08:41.4437572 +0000 GMT m=+2926.640523401
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220604160727-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect test-preload-20220604160727-5712: exit status 1 (1.112725s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: test-preload-20220604160727-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220604160727-5712 -n test-preload-20220604160727-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220604160727-5712 -n test-preload-20220604160727-5712: exit status 7 (2.7132422s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:08:45.263363    3756 status.go:247] status error: host: state: unknown state "test-preload-20220604160727-5712": docker container inspect test-preload-20220604160727-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220604160727-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-20220604160727-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "test-preload-20220604160727-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220604160727-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220604160727-5712: (7.830818s)
--- FAIL: TestPreload (85.44s)

TestScheduledStopWindows (85.28s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220604160853-5712 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p scheduled-stop-20220604160853-5712 --memory=2048 --driver=docker: exit status 60 (1m13.5160073s)

-- stdout --
	* [scheduled-stop-20220604160853-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node scheduled-stop-20220604160853-5712 in cluster scheduled-stop-20220604160853-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220604160853-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:09:07.270516    6316 network_create.go:104] error while trying to create docker network scheduled-stop-20220604160853-5712 192.168.49.0/24: create docker network scheduled-stop-20220604160853-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ea0f8d88c80cd464ec78bfe147a7f38333fb43c8d264c9f753697137107d8db6 (br-ea0f8d88c80c): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220604160853-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ea0f8d88c80cd464ec78bfe147a7f38333fb43c8d264c9f753697137107d8db6 (br-ea0f8d88c80c): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220604160853-5712 container: docker volume create scheduled-stop-20220604160853-5712 --label name.minikube.sigs.k8s.io=scheduled-stop-20220604160853-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220604160853-5712: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220604160853-5712': mkdir /var/lib/docker/volumes/scheduled-stop-20220604160853-5712: read-only file system
	
	E0604 16:09:53.203573    6316 network_create.go:104] error while trying to create docker network scheduled-stop-20220604160853-5712 192.168.58.0/24: create docker network scheduled-stop-20220604160853-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d43f294718c83d3994b5006db843bf470a136f0c5b6897b1acc86aa9b18ef929 (br-d43f294718c8): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220604160853-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d43f294718c83d3994b5006db843bf470a136f0c5b6897b1acc86aa9b18ef929 (br-d43f294718c8): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220604160853-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220604160853-5712 container: docker volume create scheduled-stop-20220604160853-5712 --label name.minikube.sigs.k8s.io=scheduled-stop-20220604160853-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220604160853-5712: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220604160853-5712': mkdir /var/lib/docker/volumes/scheduled-stop-20220604160853-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220604160853-5712 container: docker volume create scheduled-stop-20220604160853-5712 --label name.minikube.sigs.k8s.io=scheduled-stop-20220604160853-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220604160853-5712: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220604160853-5712': mkdir /var/lib/docker/volumes/scheduled-stop-20220604160853-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 60

-- stdout --
	* [scheduled-stop-20220604160853-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node scheduled-stop-20220604160853-5712 in cluster scheduled-stop-20220604160853-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220604160853-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:09:07.270516    6316 network_create.go:104] error while trying to create docker network scheduled-stop-20220604160853-5712 192.168.49.0/24: create docker network scheduled-stop-20220604160853-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ea0f8d88c80cd464ec78bfe147a7f38333fb43c8d264c9f753697137107d8db6 (br-ea0f8d88c80c): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220604160853-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ea0f8d88c80cd464ec78bfe147a7f38333fb43c8d264c9f753697137107d8db6 (br-ea0f8d88c80c): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220604160853-5712 container: docker volume create scheduled-stop-20220604160853-5712 --label name.minikube.sigs.k8s.io=scheduled-stop-20220604160853-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220604160853-5712: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220604160853-5712': mkdir /var/lib/docker/volumes/scheduled-stop-20220604160853-5712: read-only file system
	
	E0604 16:09:53.203573    6316 network_create.go:104] error while trying to create docker network scheduled-stop-20220604160853-5712 192.168.58.0/24: create docker network scheduled-stop-20220604160853-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d43f294718c83d3994b5006db843bf470a136f0c5b6897b1acc86aa9b18ef929 (br-d43f294718c8): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220604160853-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220604160853-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d43f294718c83d3994b5006db843bf470a136f0c5b6897b1acc86aa9b18ef929 (br-d43f294718c8): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220604160853-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220604160853-5712 container: docker volume create scheduled-stop-20220604160853-5712 --label name.minikube.sigs.k8s.io=scheduled-stop-20220604160853-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220604160853-5712: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220604160853-5712': mkdir /var/lib/docker/volumes/scheduled-stop-20220604160853-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220604160853-5712 container: docker volume create scheduled-stop-20220604160853-5712 --label name.minikube.sigs.k8s.io=scheduled-stop-20220604160853-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220604160853-5712: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220604160853-5712': mkdir /var/lib/docker/volumes/scheduled-stop-20220604160853-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
panic.go:482: *** TestScheduledStopWindows FAILED at 2022-06-04 16:10:06.6289507 +0000 GMT m=+3011.824796901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopWindows]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20220604160853-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect scheduled-stop-20220604160853-5712: exit status 1 (1.0765068s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: scheduled-stop-20220604160853-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220604160853-5712 -n scheduled-stop-20220604160853-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220604160853-5712 -n scheduled-stop-20220604160853-5712: exit status 7 (2.7173394s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:10:10.402511    2104 status.go:247] status error: host: state: unknown state "scheduled-stop-20220604160853-5712": docker container inspect scheduled-stop-20220604160853-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: scheduled-stop-20220604160853-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20220604160853-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-20220604160853-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220604160853-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220604160853-5712: (7.9587679s)
--- FAIL: TestScheduledStopWindows (85.28s)

TestInsufficientStorage (28.85s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220604161018-5712 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220604161018-5712 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (18.163423s)

-- stdout --
	{"specversion":"1.0","id":"4b1275be-cbfc-4940-81ae-ef70331f1400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220604161018-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"24bba52d-21eb-4b75-bd6a-928eec07fbfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"1bb2fadb-31b8-44b0-9798-7813612dd462","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"655b8475-7419-4eee-8829-571da1186352","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14123"}}
	{"specversion":"1.0","id":"2384e6d6-46cb-46ff-b6c2-67d8c36ae2e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b8d4941d-b5cf-48ae-907c-2dfd463a554a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9acd92bc-7b75-45b5-b9fe-eb41dcab371b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"58b6c893-8d98-49a8-9b64-f2b380e25778","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"52fefbe7-5f95-4b2f-8cf8-3cf808f4983c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"7bb305ee-77a3-4e0b-a2f6-3b16572b4fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220604161018-5712 in cluster insufficient-storage-20220604161018-5712","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"24a8c064-5d82-4f47-93dc-243a202389e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"516329da-16e8-425b-963f-8661bcaeae38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb3b5848-3b26-4ca8-9aa7-578385bd5746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network insufficient-storage-20220604161018-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true insufficient-storage-20220604161018-5712: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 685a418545c44f5d17cf9fd283e6ea5c29c1ee650494c81091ce454597e5015a (br-685a418545c4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"e5e10cc6-547c-4d2a-aa5f-9c1e275cab76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	E0604 16:10:32.415499     816 network_create.go:104] error while trying to create docker network insufficient-storage-20220604161018-5712 192.168.49.0/24: create docker network insufficient-storage-20220604161018-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true insufficient-storage-20220604161018-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 685a418545c44f5d17cf9fd283e6ea5c29c1ee650494c81091ce454597e5015a (br-685a418545c4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220604161018-5712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220604161018-5712 --output=json --layout=cluster: exit status 7 (2.8190966s)

-- stdout --
	{"Name":"insufficient-storage-20220604161018-5712","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"insufficient-storage-20220604161018-5712","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

-- /stdout --
** stderr ** 
	E0604 16:10:39.343728    8024 status.go:258] status error: host: state: unknown state "insufficient-storage-20220604161018-5712": docker container inspect insufficient-storage-20220604161018-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: insufficient-storage-20220604161018-5712
	E0604 16:10:39.343728    8024 status.go:261] The "insufficient-storage-20220604161018-5712" host does not exist!

** /stderr **
status_test.go:98: incorrect node status code: 507
helpers_test.go:175: Cleaning up "insufficient-storage-20220604161018-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220604161018-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220604161018-5712: (7.866044s)
--- FAIL: TestInsufficientStorage (28.85s)

TestRunningBinaryUpgrade (282.38s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2652462412.exe start -p running-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2652462412.exe start -p running-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker: exit status 70 (53.8600542s)

-- stdout --
	! [running-upgrade-20220604161047-5712] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig3619394563
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* docker "running-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220604161047-5712", then "minikube start -p running-upgrade-20220604161047-5712 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 19.55 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 64.44 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 106.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 150.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 194.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 236.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 275.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 319.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 362.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 404.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 447.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 489.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 533.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2652462412.exe start -p running-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2652462412.exe start -p running-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker: exit status 70 (1m47.5766242s)

-- stdout --
	* [running-upgrade-20220604161047-5712] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig2834806194
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "running-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* docker "running-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220604161047-5712", then "minikube start -p running-upgrade-20220604161047-5712 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2652462412.exe start -p running-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2652462412.exe start -p running-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker: exit status 70 (1m46.188617s)

-- stdout --
	* [running-upgrade-20220604161047-5712] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig1806487602
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "running-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* docker "running-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220604161047-5712", then "minikube start -p running-upgrade-20220604161047-5712 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 14.78 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 44.27 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 65.95 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 97.53 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 141.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 185.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 227.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 271.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 315.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 348.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 395.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 433.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 464.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 504.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220604161047-5712 container: output Error response from daemon: create running-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/running-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-06-04 16:15:17.0283313 +0000 GMT m=+3322.220857801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220604161047-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect running-upgrade-20220604161047-5712: exit status 1 (1.1205364s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: running-upgrade-20220604161047-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220604161047-5712 -n running-upgrade-20220604161047-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220604161047-5712 -n running-upgrade-20220604161047-5712: exit status 7 (2.9733866s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:15:21.099053    6088 status.go:247] status error: host: state: unknown state "running-upgrade-20220604161047-5712": docker container inspect running-upgrade-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: running-upgrade-20220604161047-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-20220604161047-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-20220604161047-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220604161047-5712

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220604161047-5712: (8.495932s)
--- FAIL: TestRunningBinaryUpgrade (282.38s)

TestKubernetesUpgrade (112.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220604161700-5712 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220604161700-5712 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60 (1m17.2304167s)

-- stdout --
	* [kubernetes-upgrade-20220604161700-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220604161700-5712 in cluster kubernetes-upgrade-20220604161700-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20220604161700-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:17:00.410433    6856 out.go:296] Setting OutFile to fd 1804 ...
	I0604 16:17:00.475479    6856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:17:00.475479    6856 out.go:309] Setting ErrFile to fd 1644...
	I0604 16:17:00.475479    6856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:17:00.489803    6856 out.go:303] Setting JSON to false
	I0604 16:17:00.492516    6856 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10492,"bootTime":1654348928,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:17:00.493052    6856 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:17:00.499348    6856 out.go:177] * [kubernetes-upgrade-20220604161700-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:17:00.502631    6856 notify.go:193] Checking for updates...
	I0604 16:17:00.506205    6856 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:17:00.503161    6856 preload.go:306] deleting older generation preload C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.download
	W0604 16:17:00.506205    6856 preload.go:309] Failed to clean up older preload files, consider running `minikube delete --all --purge`
	I0604 16:17:00.510614    6856 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:17:00.514369    6856 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:17:00.518374    6856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:17:00.521275    6856 config.go:178] Loaded profile config "cert-expiration-20220604161540-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:17:00.521275    6856 config.go:178] Loaded profile config "docker-flags-20220604161559-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:17:00.522359    6856 config.go:178] Loaded profile config "missing-upgrade-20220604161559-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0604 16:17:00.522541    6856 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:17:00.522541    6856 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:17:03.304997    6856 docker.go:137] docker version: linux-20.10.16
	I0604 16:17:03.314292    6856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:17:05.401547    6856 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0872329s)
	I0604 16:17:05.401547    6856 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:17:04.3451633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:17:05.583459    6856 out.go:177] * Using the docker driver based on user configuration
	I0604 16:17:05.589800    6856 start.go:284] selected driver: docker
	I0604 16:17:05.589887    6856 start.go:806] validating driver "docker" against <nil>
	I0604 16:17:05.590016    6856 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:17:05.665537    6856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:17:07.695480    6856 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0298883s)
	I0604 16:17:07.695509    6856 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:17:06.7179411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:17:07.695509    6856 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:17:07.696268    6856 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 16:17:07.699470    6856 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:17:07.701548    6856 cni.go:95] Creating CNI manager for ""
	I0604 16:17:07.701548    6856 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:17:07.701548    6856 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220604161700-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220604161700-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:17:07.703562    6856 out.go:177] * Starting control plane node kubernetes-upgrade-20220604161700-5712 in cluster kubernetes-upgrade-20220604161700-5712
	I0604 16:17:07.707337    6856 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:17:07.709340    6856 out.go:177] * Pulling base image ...
	I0604 16:17:07.713220    6856 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0604 16:17:07.713925    6856 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:17:07.713996    6856 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0604 16:17:07.713996    6856 cache.go:57] Caching tarball of preloaded images
	I0604 16:17:07.713996    6856 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:17:07.714527    6856 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0604 16:17:07.714767    6856 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220604161700-5712\config.json ...
	I0604 16:17:07.714792    6856 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220604161700-5712\config.json: {Name:mk172af4df63bb757dcbceaf81983a9e2b8ca373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:17:08.754661    6856 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:17:08.754874    6856 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:17:08.755088    6856 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:17:08.755200    6856 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:17:08.755340    6856 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:17:08.755408    6856 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:17:08.755470    6856 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:17:08.755470    6856 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:17:08.755470    6856 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:17:11.041668    6856 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:17:11.041668    6856 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:17:11.041668    6856 start.go:352] acquiring machines lock for kubernetes-upgrade-20220604161700-5712: {Name:mk4ea54ea6e4c9185cb4a5ef3ce35c86d624196a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:17:11.042210    6856 start.go:356] acquired machines lock for "kubernetes-upgrade-20220604161700-5712" in 541.7µs
	I0604 16:17:11.042409    6856 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220604161700-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220604161700
-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:17:11.042656    6856 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:17:11.142134    6856 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:17:11.142580    6856 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220604161700-5712" (driver="docker")
	I0604 16:17:11.143132    6856 client.go:168] LocalClient.Create starting
	I0604 16:17:11.143372    6856 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:17:11.144093    6856 main.go:134] libmachine: Decoding PEM data...
	I0604 16:17:11.144093    6856 main.go:134] libmachine: Parsing certificate...
	I0604 16:17:11.144093    6856 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:17:11.144093    6856 main.go:134] libmachine: Decoding PEM data...
	I0604 16:17:11.144093    6856 main.go:134] libmachine: Parsing certificate...
	I0604 16:17:11.154014    6856 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:17:12.180908    6856 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:17:12.180992    6856 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0268062s)
	I0604 16:17:12.193018    6856 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220604161700-5712] to gather additional debugging logs...
	I0604 16:17:12.193207    6856 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220604161700-5712
	W0604 16:17:13.281287    6856 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:13.281287    6856 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220604161700-5712: (1.0880685s)
	I0604 16:17:13.281287    6856 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220604161700-5712]: docker network inspect kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:13.281287    6856 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220604161700-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220604161700-5712
	
	** /stderr **
	I0604 16:17:13.288282    6856 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:17:14.324582    6856 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0362883s)
	I0604 16:17:14.345173    6856 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000794518] misses:0}
	I0604 16:17:14.345173    6856 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:17:14.346023    6856 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220604161700-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:17:14.353341    6856 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712
	W0604 16:17:15.442218    6856 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:15.442365    6856 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: (1.0888652s)
	E0604 16:17:15.442452    6856 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220604161700-5712 192.168.49.0/24: create docker network kubernetes-upgrade-20220604161700-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3a676bac526469b5b94457f9531b4e4663cc52a638717886fd3e225f7a2b6060 (br-3a676bac5264): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:17:15.442524    6856 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220604161700-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3a676bac526469b5b94457f9531b4e4663cc52a638717886fd3e225f7a2b6060 (br-3a676bac5264): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220604161700-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3a676bac526469b5b94457f9531b4e4663cc52a638717886fd3e225f7a2b6060 (br-3a676bac5264): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:17:15.458718    6856 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:17:16.517059    6856 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0581389s)
	I0604 16:17:16.524271    6856 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:17:17.605831    6856 cli_runner.go:211] docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:17:17.605888    6856 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0813838s)
	I0604 16:17:17.605888    6856 client.go:171] LocalClient.Create took 6.4626868s
	I0604 16:17:19.625517    6856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:17:19.632472    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:17:20.675210    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:20.675210    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.0427271s)
	I0604 16:17:20.675210    6856 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:20.969144    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:17:22.047988    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:22.047988    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.0788324s)
	W0604 16:17:22.047988    6856 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	
	W0604 16:17:22.047988    6856 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:22.058884    6856 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:17:22.065019    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:17:23.211301    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:23.211488    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.1462698s)
	I0604 16:17:23.211583    6856 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:23.519410    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:17:24.601150    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:24.601150    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.081728s)
	W0604 16:17:24.601150    6856 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	
	W0604 16:17:24.601150    6856 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:24.601150    6856 start.go:134] duration metric: createHost completed in 13.5583474s
	I0604 16:17:24.601603    6856 start.go:81] releasing machines lock for "kubernetes-upgrade-20220604161700-5712", held for 13.5587938s
	W0604 16:17:24.601603    6856 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220604161700-5712 container: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220604161700-5712: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712: read-only file system
	I0604 16:17:24.615607    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:25.680174    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:25.680174    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0645547s)
	I0604 16:17:25.680174    6856 delete.go:82] Unable to get host status for kubernetes-upgrade-20220604161700-5712, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	W0604 16:17:25.680174    6856 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220604161700-5712 container: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220604161700-5712: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220604161700-5712 container: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220604161700-5712: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712: read-only file system
	
	I0604 16:17:25.680174    6856 start.go:614] Will try again in 5 seconds ...
	I0604 16:17:30.689829    6856 start.go:352] acquiring machines lock for kubernetes-upgrade-20220604161700-5712: {Name:mk4ea54ea6e4c9185cb4a5ef3ce35c86d624196a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:17:30.690270    6856 start.go:356] acquired machines lock for "kubernetes-upgrade-20220604161700-5712" in 189.5µs
	I0604 16:17:30.690459    6856 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:17:30.690459    6856 fix.go:55] fixHost starting: 
	I0604 16:17:30.704765    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:31.826989    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:31.826989    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.1220565s)
	I0604 16:17:31.827093    6856 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220604161700-5712: state= err=unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:31.827301    6856 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:17:31.831551    6856 out.go:177] * docker "kubernetes-upgrade-20220604161700-5712" container is missing, will recreate.
	I0604 16:17:31.833770    6856 delete.go:124] DEMOLISHING kubernetes-upgrade-20220604161700-5712 ...
	I0604 16:17:31.847846    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:32.892045    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:32.892108    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0440102s)
	W0604 16:17:32.892241    6856 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:32.892378    6856 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:32.907220    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:33.951877    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:33.952016    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0445979s)
	I0604 16:17:33.952065    6856 delete.go:82] Unable to get host status for kubernetes-upgrade-20220604161700-5712, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:33.958371    6856 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220604161700-5712
	W0604 16:17:35.012550    6856 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:35.012599    6856 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220604161700-5712: (1.053883s)
	I0604 16:17:35.012599    6856 kic.go:356] could not find the container kubernetes-upgrade-20220604161700-5712 to remove it. will try anyways
	I0604 16:17:35.020635    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:36.038335    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:36.038420    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0175254s)
	W0604 16:17:36.038506    6856 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:36.045808    6856 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220604161700-5712 /bin/bash -c "sudo init 0"
	W0604 16:17:37.133338    6856 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220604161700-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:17:37.133522    6856 cli_runner.go:217] Completed: docker exec --privileged -t kubernetes-upgrade-20220604161700-5712 /bin/bash -c "sudo init 0": (1.0875182s)
	I0604 16:17:37.133690    6856 oci.go:625] error shutdown kubernetes-upgrade-20220604161700-5712: docker exec --privileged -t kubernetes-upgrade-20220604161700-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:38.153251    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:39.262843    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:39.262843    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.1084163s)
	I0604 16:17:39.263027    6856 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:39.263027    6856 oci.go:639] temporary error: container kubernetes-upgrade-20220604161700-5712 status is  but expect it to be exited
	I0604 16:17:39.263027    6856 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:39.736929    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:40.817211    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:40.817345    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0802709s)
	I0604 16:17:40.817345    6856 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:40.817345    6856 oci.go:639] temporary error: container kubernetes-upgrade-20220604161700-5712 status is  but expect it to be exited
	I0604 16:17:40.817345    6856 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:41.715462    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:42.776435    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:42.776501    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0608526s)
	I0604 16:17:42.776563    6856 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:42.776603    6856 oci.go:639] temporary error: container kubernetes-upgrade-20220604161700-5712 status is  but expect it to be exited
	I0604 16:17:42.776603    6856 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:43.421028    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:44.485149    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:44.485149    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0641097s)
	I0604 16:17:44.485149    6856 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:44.485149    6856 oci.go:639] temporary error: container kubernetes-upgrade-20220604161700-5712 status is  but expect it to be exited
	I0604 16:17:44.485149    6856 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:45.609580    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:46.656577    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:46.656577    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0469863s)
	I0604 16:17:46.656577    6856 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:46.656577    6856 oci.go:639] temporary error: container kubernetes-upgrade-20220604161700-5712 status is  but expect it to be exited
	I0604 16:17:46.656577    6856 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:48.182114    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:49.233325    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:49.233325    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.051199s)
	I0604 16:17:49.233325    6856 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:49.233325    6856 oci.go:639] temporary error: container kubernetes-upgrade-20220604161700-5712 status is  but expect it to be exited
	I0604 16:17:49.233325    6856 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:52.292285    6856 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}
	W0604 16:17:53.375423    6856 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:17:53.375423    6856 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: (1.0831268s)
	I0604 16:17:53.375423    6856 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:53.375423    6856 oci.go:639] temporary error: container kubernetes-upgrade-20220604161700-5712 status is  but expect it to be exited
	I0604 16:17:53.375423    6856 oci.go:88] couldn't shut down kubernetes-upgrade-20220604161700-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	 
	I0604 16:17:53.382454    6856 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-20220604161700-5712
	I0604 16:17:54.464950    6856 cli_runner.go:217] Completed: docker rm -f -v kubernetes-upgrade-20220604161700-5712: (1.0823088s)
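The repeated `retry.go:31] will retry after …` lines above show a jittered, growing delay between verification attempts (462ms, 890ms, … up to ~3s). As a hypothetical illustration of that pattern (not minikube's actual `retry.go` implementation; the `retry`, `base`, and `max_delay` names are invented here):

```python
import random
import time

def retry(fn, attempts=5, base=0.5, max_delay=3.0):
    """Call fn until it succeeds, sleeping a randomized, growing delay
    between attempts, similar to the 'will retry after ...' lines above."""
    delay = base
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            # jitter the wait so concurrent callers do not retry in lockstep
            time.sleep(random.uniform(0, min(delay, max_delay)))
            delay *= 2
```

In the log the delays are not strictly increasing, which is consistent with drawing each wait from a jittered range rather than a fixed schedule.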
	I0604 16:17:54.472249    6856 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220604161700-5712
	W0604 16:17:55.521472    6856 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:55.521524    6856 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220604161700-5712: (1.0490837s)
	I0604 16:17:55.531401    6856 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:17:56.610507    6856 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:17:56.610507    6856 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0790945s)
	I0604 16:17:56.617570    6856 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220604161700-5712] to gather additional debugging logs...
	I0604 16:17:56.617570    6856 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220604161700-5712
	W0604 16:17:57.690210    6856 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:17:57.690210    6856 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220604161700-5712: (1.0726282s)
	I0604 16:17:57.690210    6856 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220604161700-5712]: docker network inspect kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220604161700-5712
	I0604 16:17:57.690210    6856 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220604161700-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220604161700-5712
	
	** /stderr **
	W0604 16:17:57.691194    6856 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:17:57.691194    6856 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:17:58.704852    6856 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:17:58.709372    6856 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:17:58.709611    6856 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220604161700-5712" (driver="docker")
	I0604 16:17:58.709611    6856 client.go:168] LocalClient.Create starting
	I0604 16:17:58.710164    6856 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:17:58.710293    6856 main.go:134] libmachine: Decoding PEM data...
	I0604 16:17:58.710293    6856 main.go:134] libmachine: Parsing certificate...
	I0604 16:17:58.710293    6856 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:17:58.710984    6856 main.go:134] libmachine: Decoding PEM data...
	I0604 16:17:58.710984    6856 main.go:134] libmachine: Parsing certificate...
	I0604 16:17:58.720411    6856 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:17:59.826434    6856 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:17:59.826566    6856 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220604161700-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.106012s)
	I0604 16:17:59.834040    6856 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220604161700-5712] to gather additional debugging logs...
	I0604 16:17:59.834040    6856 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220604161700-5712
	W0604 16:18:00.914032    6856 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:00.914149    6856 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220604161700-5712: (1.0798857s)
	I0604 16:18:00.914194    6856 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220604161700-5712]: docker network inspect kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:00.914258    6856 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220604161700-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220604161700-5712
	
	** /stderr **
	I0604 16:18:00.922225    6856 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:18:01.970059    6856 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0476291s)
	I0604 16:18:01.986336    6856 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000794518] amended:false}} dirty:map[] misses:0}
	I0604 16:18:01.986336    6856 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:18:02.001149    6856 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000794518] amended:true}} dirty:map[192.168.49.0:0xc000794518 192.168.58.0:0xc0005b0338] misses:0}
	I0604 16:18:02.001149    6856 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
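The two `network.go` lines above skip the reserved `192.168.49.0/24` and settle on `192.168.58.0/24`. A minimal sketch of that kind of free-subnet search using Python's `ipaddress` module (illustrative only; `next_free_subnet`, `reserved`, and the step size are assumptions inferred from the observed 49 → 58 jump, not minikube's code):

```python
import ipaddress

# subnets already handed out (192.168.49.0/24 is reserved in the log above)
reserved = [ipaddress.ip_network("192.168.49.0/24")]

def next_free_subnet(start="192.168.49.0/24", step=9, limit=20):
    """Walk candidate /24s, bumping the third octet by `step` each time,
    and return the first one overlapping nothing in `reserved`."""
    net = ipaddress.ip_network(start)
    for _ in range(limit):
        if not any(net.overlaps(r) for r in reserved):
            return net
        # advance by `step` /24 blocks (step * 256 addresses)
        net = ipaddress.ip_network(
            (int(net.network_address) + step * 256, net.prefixlen))
    raise RuntimeError("no free subnet found")
```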
	I0604 16:18:02.001731    6856 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220604161700-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:18:02.009385    6856 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712
	W0604 16:18:03.030360    6856 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:03.030360    6856 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: (1.0206582s)
	E0604 16:18:03.030360    6856 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220604161700-5712 192.168.58.0/24: create docker network kubernetes-upgrade-20220604161700-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f77bb9885091082785ac053d123293a41a27703513f7616d01b3bde39ae938a8 (br-f77bb9885091): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:18:03.030612    6856 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220604161700-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f77bb9885091082785ac053d123293a41a27703513f7616d01b3bde39ae938a8 (br-f77bb9885091): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220604161700-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f77bb9885091082785ac053d123293a41a27703513f7616d01b3bde39ae938a8 (br-f77bb9885091): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:18:03.047089    6856 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:18:04.109748    6856 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0626483s)
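The daemon error above ("networks have overlapping IPv4") means the chosen `192.168.58.0/24` collides with the range of an existing bridge network. The overlap condition itself can be checked with Python's `ipaddress` module; a minimal sketch (the `overlapping` helper is hypothetical, not part of Docker or minikube):

```python
import ipaddress

def overlapping(existing, candidate):
    """Return the networks in `existing` whose IPv4 range overlaps
    `candidate`, the condition the Docker daemon rejects above."""
    cand = ipaddress.ip_network(candidate)
    return [str(n) for n in map(ipaddress.ip_network, existing)
            if n.overlaps(cand)]
```

Feeding it the subnets of the host's current Docker networks (e.g. from `docker network inspect`) would reveal which existing bridge claimed the range before minikube tried to create its dedicated network.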
	I0604 16:18:04.116748    6856 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:18:05.204257    6856 cli_runner.go:211] docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:18:05.204257    6856 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0872538s)
	I0604 16:18:05.204357    6856 client.go:171] LocalClient.Create took 6.4946818s
	I0604 16:18:07.218862    6856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:18:07.226447    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:08.353554    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:08.353822    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.1270956s)
	I0604 16:18:08.354005    6856 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:08.697390    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:09.874250    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:09.874250    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.1768487s)
	W0604 16:18:09.874250    6856 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	
	W0604 16:18:09.874250    6856 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:09.884238    6856 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:18:09.890243    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:10.986037    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:10.986037    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.0957833s)
	I0604 16:18:10.986037    6856 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:11.222773    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:12.344841    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:12.344941    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.1220194s)
	W0604 16:18:12.345154    6856 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	
	W0604 16:18:12.345154    6856 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:12.345214    6856 start.go:134] duration metric: createHost completed in 13.6402268s
	I0604 16:18:12.356696    6856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:18:12.364028    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:13.489038    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:13.489087    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.1248195s)
	I0604 16:18:13.489434    6856 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:13.750013    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:14.803834    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:14.803834    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.0538104s)
	W0604 16:18:14.803834    6856 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	
	W0604 16:18:14.803834    6856 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:14.813837    6856 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:18:14.820861    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:15.909742    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:16.087139    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.0888687s)
	I0604 16:18:16.087254    6856 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:16.305869    6856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712
	W0604 16:18:17.341243    6856 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712 returned with exit code 1
	I0604 16:18:17.341243    6856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: (1.0353629s)
	W0604 16:18:17.341243    6856 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	
	W0604 16:18:17.341243    6856 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	I0604 16:18:17.341243    6856 fix.go:57] fixHost completed within 46.6502949s
	I0604 16:18:17.341243    6856 start.go:81] releasing machines lock for "kubernetes-upgrade-20220604161700-5712", held for 46.6504061s
	W0604 16:18:17.341243    6856 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220604161700-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220604161700-5712 container: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220604161700-5712: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220604161700-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220604161700-5712 container: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220604161700-5712: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712: read-only file system
	
	I0604 16:18:17.345250    6856 out.go:177] 
	W0604 16:18:17.348241    6856 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220604161700-5712 container: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220604161700-5712: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220604161700-5712 container: docker volume create kubernetes-upgrade-20220604161700-5712 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220604161700-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220604161700-5712: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220604161700-5712: read-only file system
	
	W0604 16:18:17.348241    6856 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:18:17.348241    6856 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:18:17.353255    6856 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220604161700-5712 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60
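The `retry.go:31` lines above show minikube polling `docker container inspect` with short waits between attempts before giving up. A minimal sketch of that retry-with-backoff pattern (the attempt count and fixed 200ms delay are illustrative, not minikube's actual randomized jitter):

```shell
# retry MAX CMD...: rerun CMD until it succeeds or MAX attempts are used.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      return 1          # give up; the caller surfaces the last error
    fi
    sleep 0.2           # minikube waits ~200ms (randomized) between attempts
  done
}

retry 3 true && echo recovered
```

In the log the retried command (`docker container inspect` on a container that no longer exists) can never succeed, so every retry budget is exhausted and the error propagates up to `fixHost`.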
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220604161700-5712
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220604161700-5712: exit status 82 (22.8219231s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20220604161700-5712"  ...
	* Stopping node "kubernetes-upgrade-20220604161700-5712"  ...
	* Stopping node "kubernetes-upgrade-20220604161700-5712"  ...
	* Stopping node "kubernetes-upgrade-20220604161700-5712"  ...
	* Stopping node "kubernetes-upgrade-20220604161700-5712"  ...
	* Stopping node "kubernetes-upgrade-20220604161700-5712"  ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:18:22.838823    2984 daemonize_windows.go:38] error terminating scheduled stop for profile kubernetes-upgrade-20220604161700-5712: stopping schedule-stop service for profile kubernetes-upgrade-20220604161700-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220604161700-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220604161700-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20220604161700-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220604161700-5712 failed: exit status 82
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-06-04 16:18:40.3058412 +0000 GMT m=+3525.496183301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220604161700-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect kubernetes-upgrade-20220604161700-5712: exit status 1 (1.1036339s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: kubernetes-upgrade-20220604161700-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220604161700-5712 -n kubernetes-upgrade-20220604161700-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220604161700-5712 -n kubernetes-upgrade-20220604161700-5712: exit status 7 (2.7735621s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:18:44.161268    7896 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20220604161700-5712": docker container inspect kubernetes-upgrade-20220604161700-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220604161700-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220604161700-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220604161700-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220604161700-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220604161700-5712: (8.5534041s)
--- FAIL: TestKubernetesUpgrade (112.59s)
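Every failure in this test bottoms out in the same daemon error strings. A small sketch (function and label names are hypothetical, mirroring the reason codes that appear in this log) of classifying the stderr the way the `PR_DOCKER_READONLY_VOL` path does:

```shell
# classify_docker_error STDERR: map a docker daemon error message to the
# failure category seen in this log. Labels are taken from the log text,
# not from minikube's canonical reason table.
classify_docker_error() {
  case "$1" in
    *"read-only file system"*) echo PR_DOCKER_READONLY_VOL ;;
    *"No such container"*)     echo CONTAINER_MISSING ;;
    *"overlapping IPv4"*)      echo NETWORK_SUBNET_CONFLICT ;;
    *)                         echo UNKNOWN ;;
  esac
}

classify_docker_error "mkdir /var/lib/docker/volumes/x: read-only file system"
```

The read-only `/var/lib/docker/volumes` is a daemon-side condition, which is why the suggested remedy is restarting Docker rather than any minikube-side cleanup.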

TestMissingContainerUpgrade (206.43s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.439012167.exe start -p missing-upgrade-20220604161559-5712 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.439012167.exe start -p missing-upgrade-20220604161559-5712 --memory=2200 --driver=docker: exit status 78 (52.6308765s)

-- stdout --
	* [missing-upgrade-20220604161559-5712] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220604161559-5712
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220604161559-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental progress output elided)
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220604161559-5712 container: output Error response from daemon: create missing-upgrade-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220604161559-5712': mkdir /var/lib/docker/volumes/missing-upgrade-20220604161559-5712: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220604161559-5712" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220604161559-5712 container: output Error response from daemon: create missing-upgrade-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220604161559-5712': mkdir /var/lib/docker/volumes/missing-upgrade-20220604161559-5712: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.439012167.exe start -p missing-upgrade-20220604161559-5712 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.439012167.exe start -p missing-upgrade-20220604161559-5712 --memory=2200 --driver=docker: exit status 78 (1m8.5674521s)

-- stdout --
	* [missing-upgrade-20220604161559-5712] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220604161559-5712
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "missing-upgrade-20220604161559-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220604161559-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental progress output elided)
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220604161559-5712 container: output Error response from daemon: create missing-upgrade-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220604161559-5712': mkdir /var/lib/docker/volumes/missing-upgrade-20220604161559-5712: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220604161559-5712" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220604161559-5712 container: output Error response from daemon: create missing-upgrade-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220604161559-5712': mkdir /var/lib/docker/volumes/missing-upgrade-20220604161559-5712: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.439012167.exe start -p missing-upgrade-20220604161559-5712 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.439012167.exe start -p missing-upgrade-20220604161559-5712 --memory=2200 --driver=docker: exit status 78 (1m9.8294686s)

-- stdout --
	* [missing-upgrade-20220604161559-5712] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220604161559-5712
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "missing-upgrade-20220604161559-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220604161559-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental progress output elided)
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220604161559-5712 container: output Error response from daemon: create missing-upgrade-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220604161559-5712': mkdir /var/lib/docker/volumes/missing-upgrade-20220604161559-5712: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220604161559-5712" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220604161559-5712 container: output Error response from daemon: create missing-upgrade-20220604161559-5712: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220604161559-5712': mkdir /var/lib/docker/volumes/missing-upgrade-20220604161559-5712: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 78
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-06-04 16:19:13.8882519 +0000 GMT m=+3559.078231301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220604161559-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect missing-upgrade-20220604161559-5712: exit status 1 (1.1268919s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: missing-upgrade-20220604161559-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220604161559-5712 -n missing-upgrade-20220604161559-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220604161559-5712 -n missing-upgrade-20220604161559-5712: exit status 7 (2.945042s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:19:17.940539    1684 status.go:247] status error: host: state: unknown state "missing-upgrade-20220604161559-5712": docker container inspect missing-upgrade-20220604161559-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20220604161559-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20220604161559-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20220604161559-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220604161559-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220604161559-5712: (8.4482781s)
--- FAIL: TestMissingContainerUpgrade (206.43s)
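The two failed tests so far surface several distinct non-zero exits (60, 78, 82, and 7). A hypothetical lookup for them; the hints below are read off this log's own messages, not minikube's canonical reason table:

```shell
# exit_code_hint CODE: summarize what each exit code corresponded to in
# this particular run (assumption: inferred from the log, not documented).
exit_code_hint() {
  case "$1" in
    60) echo "start failed: PR_DOCKER_READONLY_VOL" ;;
    78) echo "legacy start failed: DOCKER_READONLY" ;;
    82) echo "stop failed: GUEST_STOP_TIMEOUT" ;;
    7)  echo "status: host Nonexistent" ;;
    *)  echo "unknown exit code" ;;
  esac
}

exit_code_hint 60
```

Exit status 7 from `minikube status` is expected here ("may be ok" in helpers_test.go), since the container was never created.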

TestNoKubernetes/serial/StartWithK8s (83.15s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --driver=docker: exit status 60 (1m19.0700825s)

-- stdout --
	* [NoKubernetes-20220604161047-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node NoKubernetes-20220604161047-5712 in cluster NoKubernetes-20220604161047-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220604161047-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:11:04.879107    4400 network_create.go:104] error while trying to create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24: create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0a8f463d788a9ceb4de42e2f29fe91c52b3a2b301cae745e0ec572bbb235de53 (br-0a8f463d788a): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0a8f463d788a9ceb4de42e2f29fe91c52b3a2b301cae745e0ec572bbb235de53 (br-0a8f463d788a): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220604161047-5712 container: docker volume create NoKubernetes-20220604161047-5712 --label name.minikube.sigs.k8s.io=NoKubernetes-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220604161047-5712': mkdir /var/lib/docker/volumes/NoKubernetes-20220604161047-5712: read-only file system
	
	E0604 16:11:52.657699    4400 network_create.go:104] error while trying to create docker network NoKubernetes-20220604161047-5712 192.168.58.0/24: create docker network NoKubernetes-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3fb965d48ac6a4dba5c613441057cb779d14f848a0abc2d390f929a83d8f16dc (br-3fb965d48ac6): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3fb965d48ac6a4dba5c613441057cb779d14f848a0abc2d390f929a83d8f16dc (br-3fb965d48ac6): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220604161047-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220604161047-5712 container: docker volume create NoKubernetes-20220604161047-5712 --label name.minikube.sigs.k8s.io=NoKubernetes-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220604161047-5712': mkdir /var/lib/docker/volumes/NoKubernetes-20220604161047-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220604161047-5712 container: docker volume create NoKubernetes-20220604161047-5712 --label name.minikube.sigs.k8s.io=NoKubernetes-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220604161047-5712': mkdir /var/lib/docker/volumes/NoKubernetes-20220604161047-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220604161047-5712

=== CONT  TestNoKubernetes/serial/StartWithK8s
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220604161047-5712: exit status 1 (1.1361271s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220604161047-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220604161047-5712 -n NoKubernetes-20220604161047-5712

=== CONT  TestNoKubernetes/serial/StartWithK8s
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220604161047-5712 -n NoKubernetes-20220604161047-5712: exit status 7 (2.9312492s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:12:10.794261    7152 status.go:247] status error: host: state: unknown state "NoKubernetes-20220604161047-5712": docker container inspect NoKubernetes-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220604161047-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220604161047-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (83.15s)
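Editor's note (not part of the test output): the repeated "networks have overlapping IPv4" errors above are Docker refusing to create a second bridge network whose subnet collides with an existing one. The collision check itself can be reproduced with Python's stdlib `ipaddress` module, using the subnets minikube attempted in the log:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses --
    the condition behind Docker's 'networks have overlapping
    IPv4' error message."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Subnets from the log: the first retry reused 192.168.49.0/24
# (collides with the stale network), the fallback moved to
# 192.168.58.0/24 (disjoint, but that subnet was also taken).
print(subnets_overlap("192.168.49.0/24", "192.168.49.0/24"))  # True
print(subnets_overlap("192.168.49.0/24", "192.168.58.0/24"))  # False
```

This is why minikube's fallback to a different /24 can still fail: the check is against every existing bridge network on the daemon, not just the one it created earlier.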

TestStoppedBinaryUpgrade/Upgrade (295.21s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2054102463.exe start -p stopped-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2054102463.exe start -p stopped-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker: exit status 70 (1m21.3393264s)

-- stdout --
	* [stopped-upgrade-20220604161047-5712] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig412160685
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220604161047-5712", then "minikube start -p stopped-upgrade-20220604161047-5712 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 7.12 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 18.80 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 53.55 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 81.25 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 114.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 146.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 177.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 211.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 239.38 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 271.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 303.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 336.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 369.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 403.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 436.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 463.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 496.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 529.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2054102463.exe start -p stopped-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2054102463.exe start -p stopped-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker: exit status 70 (1m45.8625584s)

-- stdout --
	* [stopped-upgrade-20220604161047-5712] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig2567424489
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "stopped-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220604161047-5712", then "minikube start -p stopped-upgrade-20220604161047-5712 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 21.92 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 60.69 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 107.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 152.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 197.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 241.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 286.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 331.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 376.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 421.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 465.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 511.38 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2054102463.exe start -p stopped-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2054102463.exe start -p stopped-upgrade-20220604161047-5712 --memory=2200 --vm-driver=docker: exit status 70 (1m45.1646985s)

-- stdout --
	* [stopped-upgrade-20220604161047-5712] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig1089938498
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "stopped-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220604161047-5712" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220604161047-5712", then "minikube start -p stopped-upgrade-20220604161047-5712 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 16.47 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 49.11 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 64.39 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 106.16 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 145.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 175.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 198.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 233.88 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 272.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 303.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 343.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 380.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 423.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 457.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 488.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 522.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220604161047-5712 container: output Error response from daemon: create stopped-upgrade-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220604161047-5712': mkdir /var/lib/docker/volumes/stopped-upgrade-20220604161047-5712: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (295.21s)
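Editor's note (not part of the test output): the root cause here is `mkdir /var/lib/docker/volumes/...: read-only file system`, i.e. the Docker daemon's volume root became unwritable (hence the PR_DOCKER_READONLY_VOL exit and the "Restart Docker" suggestion). A minimal, hypothetical probe for that condition is sketched below; note the real path lives inside the Docker Desktop VM, not on the Windows host, so this illustrates the failing operation rather than diagnosing this machine:

```python
import os
import tempfile

def dir_is_writable(path: str) -> bool:
    """Probe whether a new directory entry can be created under
    `path` -- the exact operation that failed for
    /var/lib/docker/volumes in the log above."""
    probe = os.path.join(path, ".write-probe")
    try:
        os.mkdir(probe)
    except OSError:
        # EROFS ("read-only file system") lands here, as would
        # a permissions failure.
        return False
    os.rmdir(probe)
    return True

with tempfile.TemporaryDirectory() as d:
    print(dir_is_writable(d))  # True for a normal temp dir
```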

TestNoKubernetes/serial/StartWithStopK8s (119.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --driver=docker: exit status 60 (1m55.3102158s)

-- stdout --
	* [NoKubernetes-20220604161047-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20220604161047-5712 in cluster NoKubernetes-20220604161047-5712
	* Pulling base image ...
	* docker "NoKubernetes-20220604161047-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220604161047-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:12:58.123553    4240 network_create.go:104] error while trying to create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24: create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dd508efad0347fae93f442385639e92f1cd668109573f55871e6cad0f6ee862e (br-dd508efad034): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dd508efad0347fae93f442385639e92f1cd668109573f55871e6cad0f6ee862e (br-dd508efad034): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220604161047-5712 container: docker volume create NoKubernetes-20220604161047-5712 --label name.minikube.sigs.k8s.io=NoKubernetes-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220604161047-5712': mkdir /var/lib/docker/volumes/NoKubernetes-20220604161047-5712: read-only file system
	
	E0604 16:13:51.709912    4240 network_create.go:104] error while trying to create docker network NoKubernetes-20220604161047-5712 192.168.58.0/24: create docker network NoKubernetes-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b52d60170b37c4acae0d13913b465aba7d82ab40821877aeab7b4a1da96a843d (br-b52d60170b37): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220604161047-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b52d60170b37c4acae0d13913b465aba7d82ab40821877aeab7b4a1da96a843d (br-b52d60170b37): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220604161047-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220604161047-5712 container: docker volume create NoKubernetes-20220604161047-5712 --label name.minikube.sigs.k8s.io=NoKubernetes-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220604161047-5712': mkdir /var/lib/docker/volumes/NoKubernetes-20220604161047-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220604161047-5712 container: docker volume create NoKubernetes-20220604161047-5712 --label name.minikube.sigs.k8s.io=NoKubernetes-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220604161047-5712': mkdir /var/lib/docker/volumes/NoKubernetes-20220604161047-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220604161047-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220604161047-5712: exit status 1 (1.1493467s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220604161047-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220604161047-5712 -n NoKubernetes-20220604161047-5712

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220604161047-5712 -n NoKubernetes-20220604161047-5712: exit status 7 (2.9777852s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:14:10.227502    7836 status.go:247] status error: host: state: unknown state "NoKubernetes-20220604161047-5712": docker container inspect NoKubernetes-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220604161047-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220604161047-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (119.45s)
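Editor's note (not part of the test output): the `br-...` names in the conflict messages are the Linux bridge interfaces Docker creates for user-defined networks, derived from the network ID. Assuming the usual convention (prefix `br-` plus the first 12 hex characters of the ID), the pairings in the log can be reproduced:

```python
def bridge_ifname(network_id: str) -> str:
    """Derive the bridge interface name for a user-defined
    Docker network: 'br-' plus the 12-character short ID."""
    return "br-" + network_id[:12]

# IDs taken from the error messages above:
print(bridge_ifname(
    "0a8f463d788a9ceb4de42e2f29fe91c52b3a2b301cae745e0ec572bbb235de53"))
# -> br-0a8f463d788a, matching "(br-0a8f463d788a)" in the log
```

This mapping is useful when cleaning up after these failures: `ip link` on the daemon host shows the `br-*` interfaces, and the short ID identifies which `docker network rm` target each one belongs to.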

TestNoKubernetes/serial/Start (101.12s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --driver=docker: exit status 1 (1m37.0242081s)

-- stdout --
	* [NoKubernetes-20220604161047-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20220604161047-5712 in cluster NoKubernetes-20220604161047-5712
	* Pulling base image ...
	* docker "NoKubernetes-20220604161047-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220604161047-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...

-- /stdout --
** stderr ** 
	E0604 16:14:56.822488    1536 network_create.go:104] error while trying to create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24: create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d08522b406e436be590f0041415719f8fd0c77d4c3f4dab1a285780d11344ef1 (br-d08522b406e4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220604161047-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220604161047-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d08522b406e436be590f0041415719f8fd0c77d4c3f4dab1a285780d11344ef1 (br-d08522b406e4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220604161047-5712 container: docker volume create NoKubernetes-20220604161047-5712 --label name.minikube.sigs.k8s.io=NoKubernetes-20220604161047-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220604161047-5712: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220604161047-5712': mkdir /var/lib/docker/volumes/NoKubernetes-20220604161047-5712: read-only file system
	

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220604161047-5712

=== CONT  TestNoKubernetes/serial/Start
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220604161047-5712: exit status 1 (1.1455988s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220604161047-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220604161047-5712 -n NoKubernetes-20220604161047-5712

=== CONT  TestNoKubernetes/serial/Start
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220604161047-5712 -n NoKubernetes-20220604161047-5712: exit status 7 (2.9408048s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:15:51.369131    6720 status.go:247] status error: host: state: unknown state "NoKubernetes-20220604161047-5712": docker container inspect NoKubernetes-20220604161047-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220604161047-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220604161047-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (101.12s)

TestPause/serial/Start (81.69s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220604161529-5712 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-20220604161529-5712 --memory=2048 --install-addons=false --wait=all --driver=docker: exit status 60 (1m17.5996758s)

-- stdout --
	* [pause-20220604161529-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node pause-20220604161529-5712 in cluster pause-20220604161529-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20220604161529-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0604 16:15:44.822879    5488 network_create.go:104] error while trying to create docker network pause-20220604161529-5712 192.168.49.0/24: create docker network pause-20220604161529-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220604161529-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 53e921487a73281ff1be83eee4cafe0eac0f5db2e69142dc590efa83d8841a78 (br-53e921487a73): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220604161529-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220604161529-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 53e921487a73281ff1be83eee4cafe0eac0f5db2e69142dc590efa83d8841a78 (br-53e921487a73): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for pause-20220604161529-5712 container: docker volume create pause-20220604161529-5712 --label name.minikube.sigs.k8s.io=pause-20220604161529-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220604161529-5712: error while creating volume root path '/var/lib/docker/volumes/pause-20220604161529-5712': mkdir /var/lib/docker/volumes/pause-20220604161529-5712: read-only file system
	
	E0604 16:16:33.160343    5488 network_create.go:104] error while trying to create docker network pause-20220604161529-5712 192.168.58.0/24: create docker network pause-20220604161529-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220604161529-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e112dc95efc0a628f30385ed9254ed49745892d5c2cc1245a6cc75cc110de796 (br-e112dc95efc0): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220604161529-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220604161529-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e112dc95efc0a628f30385ed9254ed49745892d5c2cc1245a6cc75cc110de796 (br-e112dc95efc0): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p pause-20220604161529-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220604161529-5712 container: docker volume create pause-20220604161529-5712 --label name.minikube.sigs.k8s.io=pause-20220604161529-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220604161529-5712: error while creating volume root path '/var/lib/docker/volumes/pause-20220604161529-5712': mkdir /var/lib/docker/volumes/pause-20220604161529-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220604161529-5712 container: docker volume create pause-20220604161529-5712 --label name.minikube.sigs.k8s.io=pause-20220604161529-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220604161529-5712: error while creating volume root path '/var/lib/docker/volumes/pause-20220604161529-5712': mkdir /var/lib/docker/volumes/pause-20220604161529-5712: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-20220604161529-5712 --memory=2048 --install-addons=false --wait=all --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220604161529-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20220604161529-5712: exit status 1 (1.0721205s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: pause-20220604161529-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220604161529-5712 -n pause-20220604161529-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220604161529-5712 -n pause-20220604161529-5712: exit status 7 (3.0065505s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:16:51.285347    6720 status.go:247] status error: host: state: unknown state "pause-20220604161529-5712": docker container inspect pause-20220604161529-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220604161529-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20220604161529-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (81.69s)
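Editor's note: the root cause common to the failures above is Docker refusing to create a bridge network because its subnet collides with a leftover bridge ("networks have overlapping IPv4"). The overlap condition itself is simple to check; the sketch below is a hypothetical illustration using Go's standard `net` package, not minikube's actual subnet-selection code.

```go
package main

import (
	"fmt"
	"net"
)

// subnetsOverlap reports whether two IPv4 CIDR ranges share any address,
// which is the condition Docker rejects in the errors logged above.
func subnetsOverlap(a, b string) bool {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false
	}
	// Two CIDR ranges overlap iff either network address lies inside the other.
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	// The subnet minikube requested vs. a stale bridge on the same range.
	fmt.Println(subnetsOverlap("192.168.49.0/24", "192.168.49.0/24")) // true
	// The fallback subnet minikube tried next does not collide with the first.
	fmt.Println(subnetsOverlap("192.168.49.0/24", "192.168.58.0/24")) // false
}
```

In the TestPause run, minikube fell back from 192.168.49.0/24 to 192.168.58.0/24, but that range collided with yet another stale bridge (br-1140b1ac4d94), so both attempts failed before the separate read-only-filesystem error aborted the start.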

TestStoppedBinaryUpgrade/MinikubeLogs (3.41s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220604161047-5712
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220604161047-5712: exit status 80 (3.403837s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                  Args                                  |                 Profile                  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | -p nospam-20220604152324-5712                                          | nospam-20220604152324-5712               | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:26 GMT | 04 Jun 22 15:26 GMT |
	| cache   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache add k8s.gcr.io/pause:3.1                                         |                                          |                   |                |                     |                     |
	| cache   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache add k8s.gcr.io/pause:3.3                                         |                                          |                   |                |                     |                     |
	| cache   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache add                                                              |                                          |                   |                |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                |                                          |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3                                            | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	| cache   | list                                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	| cache   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	|         | cache reload                                                           |                                          |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1                                            | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	| cache   | delete k8s.gcr.io/pause:latest                                         | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:30 GMT | 04 Jun 22 15:30 GMT |
	| config  | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:32 GMT | 04 Jun 22 15:32 GMT |
	|         | config unset cpus                                                      |                                          |                   |                |                     |                     |
	| config  | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:32 GMT | 04 Jun 22 15:32 GMT |
	|         | config set cpus 2                                                      |                                          |                   |                |                     |                     |
	| config  | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:32 GMT | 04 Jun 22 15:32 GMT |
	|         | config get cpus                                                        |                                          |                   |                |                     |                     |
	| config  | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:32 GMT | 04 Jun 22 15:32 GMT |
	|         | config unset cpus                                                      |                                          |                   |                |                     |                     |
	| addons  | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:32 GMT | 04 Jun 22 15:32 GMT |
	|         | addons list                                                            |                                          |                   |                |                     |                     |
	| addons  | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:32 GMT | 04 Jun 22 15:32 GMT |
	|         | addons list -o json                                                    |                                          |                   |                |                     |                     |
	| profile | list --output json                                                     | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:32 GMT | 04 Jun 22 15:33 GMT |
	| profile | list                                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	| profile | list -l                                                                | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	| profile | list -o json                                                           | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	| profile | list -o json --light                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	| image   | functional-20220604152644-5712 image load --daemon                     | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220604152644-5712  |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712 image load --daemon                     | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220604152644-5712  |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712 image save                              | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220604152644-5712  |                                          |                   |                |                     |                     |
	|         | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712 image rm                                | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220604152644-5712  |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls --format short                                                |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls --format yaml                                                 |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls --format json                                                 |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls --format table                                                |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712 image build -t                          | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | localhost/my-image:functional-20220604152644-5712                      |                                          |                   |                |                     |                     |
	|         | testdata\build                                                         |                                          |                   |                |                     |                     |
	| image   | functional-20220604152644-5712                                         | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:33 GMT | 04 Jun 22 15:33 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | functional-20220604152644-5712           | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:38 GMT | 04 Jun 22 15:38 GMT |
	|         | functional-20220604152644-5712                                         |                                          |                   |                |                     |                     |
	| addons  | ingress-addon-legacy-20220604153841-5712                               | ingress-addon-legacy-20220604153841-5712 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:40 GMT | 04 Jun 22 15:40 GMT |
	|         | addons enable ingress-dns                                              |                                          |                   |                |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | ingress-addon-legacy-20220604153841-5712 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:40 GMT | 04 Jun 22 15:40 GMT |
	|         | ingress-addon-legacy-20220604153841-5712                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | json-output-20220604154019-5712          | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:42 GMT | 04 Jun 22 15:42 GMT |
	|         | json-output-20220604154019-5712                                        |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | json-output-error-20220604154209-5712    | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:42 GMT | 04 Jun 22 15:42 GMT |
	|         | json-output-error-20220604154209-5712                                  |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | docker-network-20220604154217-5712       | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:42 GMT | 04 Jun 22 15:45 GMT |
	|         | docker-network-20220604154217-5712                                     |                                          |                   |                |                     |                     |
	|         | --network=                                                             |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | docker-network-20220604154217-5712       | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:45 GMT | 04 Jun 22 15:46 GMT |
	|         | docker-network-20220604154217-5712                                     |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | docker-network-20220604154623-5712       | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:46 GMT | 04 Jun 22 15:49 GMT |
	|         | docker-network-20220604154623-5712                                     |                                          |                   |                |                     |                     |
	|         | --network=bridge                                                       |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | docker-network-20220604154623-5712       | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:49 GMT | 04 Jun 22 15:50 GMT |
	|         | docker-network-20220604154623-5712                                     |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | custom-subnet-20220604155016-5712        | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:50 GMT | 04 Jun 22 15:53 GMT |
	|         | custom-subnet-20220604155016-5712                                      |                                          |                   |                |                     |                     |
	|         | --subnet=192.168.60.0/24                                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | custom-subnet-20220604155016-5712        | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:53 GMT | 04 Jun 22 15:54 GMT |
	|         | custom-subnet-20220604155016-5712                                      |                                          |                   |                |                     |                     |
	| delete  | -p second-20220604155412-5712                                          | second-20220604155412-5712               | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:55 GMT | 04 Jun 22 15:55 GMT |
	| delete  | -p first-20220604155412-5712                                           | first-20220604155412-5712                | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:55 GMT | 04 Jun 22 15:55 GMT |
	| delete  | -p                                                                     | mount-start-2-20220604155547-5712        | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:57 GMT | 04 Jun 22 15:57 GMT |
	|         | mount-start-2-20220604155547-5712                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | mount-start-1-20220604155547-5712        | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:57 GMT | 04 Jun 22 15:57 GMT |
	|         | mount-start-1-20220604155547-5712                                      |                                          |                   |                |                     |                     |
	| profile | list --output json                                                     | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 15:59 GMT | 04 Jun 22 15:59 GMT |
	| delete  | -p                                                                     | multinode-20220604155719-5712-m02        | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:07 GMT | 04 Jun 22 16:07 GMT |
	|         | multinode-20220604155719-5712-m02                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | multinode-20220604155719-5712            | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:07 GMT | 04 Jun 22 16:07 GMT |
	|         | multinode-20220604155719-5712                                          |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | test-preload-20220604160727-5712         | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:08 GMT | 04 Jun 22 16:08 GMT |
	|         | test-preload-20220604160727-5712                                       |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | scheduled-stop-20220604160853-5712       | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:10 GMT | 04 Jun 22 16:10 GMT |
	|         | scheduled-stop-20220604160853-5712                                     |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | insufficient-storage-20220604161018-5712 | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:10 GMT | 04 Jun 22 16:10 GMT |
	|         | insufficient-storage-20220604161018-5712                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | offline-docker-20220604161047-5712       | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:12 GMT | 04 Jun 22 16:12 GMT |
	|         | offline-docker-20220604161047-5712                                     |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | force-systemd-flag-20220604161219-5712   | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:13 GMT | 04 Jun 22 16:13 GMT |
	|         | force-systemd-flag-20220604161219-5712                                 |                                          |                   |                |                     |                     |
	| delete  | -p flannel-20220604161352-5712                                         | flannel-20220604161352-5712              | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:13 GMT | 04 Jun 22 16:14 GMT |
	| delete  | -p                                                                     | custom-flannel-20220604161400-5712       | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:14 GMT | 04 Jun 22 16:14 GMT |
	|         | custom-flannel-20220604161400-5712                                     |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | running-upgrade-20220604161047-5712      | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:15 GMT | 04 Jun 22 16:15 GMT |
	|         | running-upgrade-20220604161047-5712                                    |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | force-systemd-env-20220604161407-5712    | minikube2\jenkins | v1.26.0-beta.1 | 04 Jun 22 16:15 GMT | 04 Jun 22 16:15 GMT |
	|         | force-systemd-env-20220604161407-5712                                  |                                          |                   |                |                     |                     |
	|---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/04 16:15:41
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 16:15:41.070072    3732 out.go:296] Setting OutFile to fd 1784 ...
	I0604 16:15:41.125868    3732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:15:41.125868    3732 out.go:309] Setting ErrFile to fd 1788...
	I0604 16:15:41.125868    3732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:15:41.136391    3732 out.go:303] Setting JSON to false
	I0604 16:15:41.139406    3732 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10413,"bootTime":1654348928,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:15:41.139406    3732 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:15:41.142389    3732 out.go:177] * [cert-expiration-20220604161540-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:15:41.147692    3732 notify.go:193] Checking for updates...
	I0604 16:15:41.150424    3732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:15:41.152786    3732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:15:41.154981    3732 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:15:41.156380    3732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "stopped-upgrade-20220604161047-5712": docker container inspect stopped-upgrade-20220604161047-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: stopped-upgrade-20220604161047-5712
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_754.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (3.41s)
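The "Last Start" log header above documents the klog-style line format these minikube logs use: `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. When triaging failures like the one above it can help to pull the level, source location, and message out of each line programmatically. A minimal parsing sketch (the regex and `parse_klog` helper are illustrative, not part of minikube or klog):

```python
import re

# Matches the header format stated in the log: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
# e.g. "I0604 16:15:41.070072    3732 out.go:296] Setting OutFile to fd 1784 ..."
KLOG_LINE = re.compile(
    r"^(?P<level>[IWEF])"                          # severity: Info/Warning/Error/Fatal
    r"(?P<month>\d{2})(?P<day>\d{2})\s+"           # mmdd
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"       # hh:mm:ss.uuuuuu
    r"(?P<thread>\d+)\s+"                          # thread id
    r"(?P<source>[^ :]+:\d+)\]\s"                  # file:line]
    r"(?P<msg>.*)$"                                # message
)

def parse_klog(line: str):
    """Return a dict of klog fields, or None if the line is not klog-formatted."""
    m = KLOG_LINE.match(line.strip())
    return m.groupdict() if m else None
```

For example, `parse_klog("W0604 16:15:41.139406    3732 start.go:123] gopshost...")["level"]` yields `"W"`, which makes it easy to filter a long log dump down to warnings and errors.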

TestStartStop/group/old-k8s-version/serial/FirstStart (81.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220604161852-5712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220604161852-5712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (1m16.8913291s)

-- stdout --
	* [old-k8s-version-20220604161852-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node old-k8s-version-20220604161852-5712 in cluster old-k8s-version-20220604161852-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220604161852-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:18:52.989144    8124 out.go:296] Setting OutFile to fd 1804 ...
	I0604 16:18:53.046208    8124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:18:53.046208    8124 out.go:309] Setting ErrFile to fd 1580...
	I0604 16:18:53.046296    8124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:18:53.057853    8124 out.go:303] Setting JSON to false
	I0604 16:18:53.060487    8124 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10605,"bootTime":1654348928,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:18:53.060487    8124 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:18:53.063836    8124 out.go:177] * [old-k8s-version-20220604161852-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:18:53.065909    8124 notify.go:193] Checking for updates...
	I0604 16:18:53.069269    8124 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:18:53.071825    8124 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:18:53.074206    8124 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:18:53.076489    8124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:18:53.079414    8124 config.go:178] Loaded profile config "cert-expiration-20220604161540-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:18:53.079953    8124 config.go:178] Loaded profile config "cert-options-20220604161736-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:18:53.080363    8124 config.go:178] Loaded profile config "missing-upgrade-20220604161559-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0604 16:18:53.080735    8124 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:18:53.080942    8124 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:18:55.758857    8124 docker.go:137] docker version: linux-20.10.16
	I0604 16:18:55.767064    8124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:18:57.856649    8124 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0895624s)
	I0604 16:18:57.856649    8124 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:18:56.8430258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:18:57.860886    8124 out.go:177] * Using the docker driver based on user configuration
	I0604 16:18:57.863058    8124 start.go:284] selected driver: docker
	I0604 16:18:57.863058    8124 start.go:806] validating driver "docker" against <nil>
	I0604 16:18:57.863058    8124 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:18:57.945766    8124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:18:59.954260    8124 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0082477s)
	I0604 16:18:59.954626    8124 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:18:59.0149897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:18:59.954851    8124 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:18:59.955513    8124 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:18:59.958324    8124 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:18:59.960348    8124 cni.go:95] Creating CNI manager for ""
	I0604 16:18:59.960348    8124 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:18:59.960348    8124 start_flags.go:306] config:
	{Name:old-k8s-version-20220604161852-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220604161852-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:18:59.964516    8124 out.go:177] * Starting control plane node old-k8s-version-20220604161852-5712 in cluster old-k8s-version-20220604161852-5712
	I0604 16:18:59.966286    8124 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:18:59.968576    8124 out.go:177] * Pulling base image ...
	I0604 16:18:59.971588    8124 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0604 16:18:59.971711    8124 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:18:59.971749    8124 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0604 16:18:59.971832    8124 cache.go:57] Caching tarball of preloaded images
	I0604 16:18:59.972428    8124 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:18:59.972526    8124 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0604 16:18:59.972839    8124 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220604161852-5712\config.json ...
	I0604 16:18:59.973125    8124 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220604161852-5712\config.json: {Name:mk48699432bed16f676e4e1b5470cfd58684d290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:19:01.117852    8124 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:19:01.117852    8124 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:01.117852    8124 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:01.117852    8124 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:19:01.118398    8124 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:19:01.118398    8124 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:19:01.118590    8124 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:19:01.118590    8124 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:19:01.118662    8124 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:03.437552    8124 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:19:03.437625    8124 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:19:03.437813    8124 start.go:352] acquiring machines lock for old-k8s-version-20220604161852-5712: {Name:mk657bf990f7a9200ffd5262e5ca8011c3561921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:03.438038    8124 start.go:356] acquired machines lock for "old-k8s-version-20220604161852-5712" in 198µs
	I0604 16:19:03.438252    8124 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220604161852-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220604161852-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:19:03.438418    8124 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:19:03.442233    8124 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:19:03.442233    8124 start.go:165] libmachine.API.Create for "old-k8s-version-20220604161852-5712" (driver="docker")
	I0604 16:19:03.442233    8124 client.go:168] LocalClient.Create starting
	I0604 16:19:03.442814    8124 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:19:03.443337    8124 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:03.443337    8124 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:03.443543    8124 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:19:03.443543    8124 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:03.443543    8124 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:03.453383    8124 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:19:04.564231    8124 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:19:04.564278    8124 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.110669s)
	I0604 16:19:04.572113    8124 network_create.go:272] running [docker network inspect old-k8s-version-20220604161852-5712] to gather additional debugging logs...
	I0604 16:19:04.572113    8124 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712
	W0604 16:19:05.678273    8124 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:05.678318    8124 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712: (1.1059888s)
	I0604 16:19:05.678318    8124 network_create.go:275] error running [docker network inspect old-k8s-version-20220604161852-5712]: docker network inspect old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220604161852-5712
	I0604 16:19:05.678398    8124 network_create.go:277] output of [docker network inspect old-k8s-version-20220604161852-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220604161852-5712
	
	** /stderr **
	I0604 16:19:05.686674    8124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:19:06.765331    8124 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0786455s)
	I0604 16:19:06.784825    8124 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00083e258] misses:0}
	I0604 16:19:06.784825    8124 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:19:06.784825    8124 network_create.go:115] attempt to create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:19:06.791885    8124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712
	W0604 16:19:07.840913    8124 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:07.840913    8124 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: (1.0487844s)
	E0604 16:19:07.840913    8124 network_create.go:104] error while trying to create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24: create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 62a09a36482f96dd50d38b5cf170670b24047f78f9314894629b5609db9a4cbc (br-62a09a36482f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:19:07.841291    8124 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 62a09a36482f96dd50d38b5cf170670b24047f78f9314894629b5609db9a4cbc (br-62a09a36482f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 62a09a36482f96dd50d38b5cf170670b24047f78f9314894629b5609db9a4cbc (br-62a09a36482f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:19:07.855310    8124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:19:08.941059    8124 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0856185s)
	I0604 16:19:08.949056    8124 cli_runner.go:164] Run: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:19:10.035382    8124 cli_runner.go:211] docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:19:10.035612    8124 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0863136s)
	I0604 16:19:10.035612    8124 client.go:171] LocalClient.Create took 6.5927775s
	I0604 16:19:12.048997    8124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:19:12.055916    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:19:13.157921    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:13.157921    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1019935s)
	I0604 16:19:13.157921    8124 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:13.448625    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:19:14.587241    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:14.587241    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1386035s)
	W0604 16:19:14.587241    8124 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:19:14.587241    8124 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:14.596203    8124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:19:14.603332    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:19:15.704979    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:15.704979    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1016353s)
	I0604 16:19:15.704979    8124 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:16.016370    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:19:17.156668    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:17.156668    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1402856s)
	W0604 16:19:17.156668    8124 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:19:17.156668    8124 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:17.156668    8124 start.go:134] duration metric: createHost completed in 13.718102s
	I0604 16:19:17.156668    8124 start.go:81] releasing machines lock for "old-k8s-version-20220604161852-5712", held for 13.7184819s
	W0604 16:19:17.156668    8124 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	I0604 16:19:17.170991    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:18.300326    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:18.300677    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.1291787s)
	I0604 16:19:18.300780    8124 delete.go:82] Unable to get host status for old-k8s-version-20220604161852-5712, assuming it has already been deleted: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	W0604 16:19:18.301122    8124 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	I0604 16:19:18.301122    8124 start.go:614] Will try again in 5 seconds ...
	I0604 16:19:23.310147    8124 start.go:352] acquiring machines lock for old-k8s-version-20220604161852-5712: {Name:mk657bf990f7a9200ffd5262e5ca8011c3561921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:23.310439    8124 start.go:356] acquired machines lock for "old-k8s-version-20220604161852-5712" in 245.3µs
	I0604 16:19:23.310643    8124 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:19:23.310643    8124 fix.go:55] fixHost starting: 
	I0604 16:19:23.325502    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:24.440549    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:24.440720    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.115035s)
	I0604 16:19:24.440832    8124 fix.go:103] recreateIfNeeded on old-k8s-version-20220604161852-5712: state= err=unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:24.440888    8124 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:19:24.444979    8124 out.go:177] * docker "old-k8s-version-20220604161852-5712" container is missing, will recreate.
	I0604 16:19:24.447119    8124 delete.go:124] DEMOLISHING old-k8s-version-20220604161852-5712 ...
	I0604 16:19:24.462851    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:25.521932    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:25.522119    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0589204s)
	W0604 16:19:25.522205    8124 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:25.522284    8124 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:25.536788    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:26.604726    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:26.604726    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0679272s)
	I0604 16:19:26.604726    8124 delete.go:82] Unable to get host status for old-k8s-version-20220604161852-5712, assuming it has already been deleted: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:26.614050    8124 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712
	W0604 16:19:27.739152    8124 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:27.739152    8124 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712: (1.1250893s)
	I0604 16:19:27.739152    8124 kic.go:356] could not find the container old-k8s-version-20220604161852-5712 to remove it. will try anyways
	I0604 16:19:27.747110    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:28.858894    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:28.858952    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.1116319s)
	W0604 16:19:28.858952    8124 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:28.866390    8124 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0"
	W0604 16:19:29.955002    8124 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:19:29.955002    8124 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0": (1.0886001s)
	I0604 16:19:29.955002    8124 oci.go:625] error shutdown old-k8s-version-20220604161852-5712: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:30.973887    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:32.078366    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:32.078366    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.1044668s)
	I0604 16:19:32.078366    8124 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:32.078366    8124 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:19:32.078366    8124 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:32.565308    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:33.623161    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:33.623161    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0578418s)
	I0604 16:19:33.623161    8124 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:33.623161    8124 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:19:33.623161    8124 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:34.536912    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:35.578895    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:35.578895    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.041972s)
	I0604 16:19:35.578895    8124 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:35.578895    8124 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:19:35.578895    8124 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:36.229351    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:37.292650    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:37.292650    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0632871s)
	I0604 16:19:37.292650    8124 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:37.292650    8124 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:19:37.292650    8124 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:38.421958    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:39.469517    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:39.469669    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0475482s)
	I0604 16:19:39.469669    8124 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:39.469669    8124 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:19:39.469669    8124 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:40.995998    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:42.050507    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:42.050507    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0544983s)
	I0604 16:19:42.050507    8124 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:42.051468    8124 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:19:42.051468    8124 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:45.114581    8124 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:19:46.155860    8124 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:46.155860    8124 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0404242s)
	I0604 16:19:46.156892    8124 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:19:46.156892    8124 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:19:46.156892    8124 oci.go:88] couldn't shut down old-k8s-version-20220604161852-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	 
	I0604 16:19:46.166827    8124 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220604161852-5712
	I0604 16:19:47.241242    8124 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220604161852-5712: (1.0744027s)
	I0604 16:19:47.247252    8124 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712
	W0604 16:19:48.337846    8124 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:48.337846    8124 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712: (1.0905819s)
	I0604 16:19:48.344863    8124 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:19:49.399059    8124 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:19:49.399059    8124 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0541854s)
	I0604 16:19:49.406054    8124 network_create.go:272] running [docker network inspect old-k8s-version-20220604161852-5712] to gather additional debugging logs...
	I0604 16:19:49.406054    8124 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712
	W0604 16:19:50.433578    8124 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:50.433612    8124 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712: (1.027476s)
	I0604 16:19:50.433612    8124 network_create.go:275] error running [docker network inspect old-k8s-version-20220604161852-5712]: docker network inspect old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220604161852-5712
	I0604 16:19:50.433612    8124 network_create.go:277] output of [docker network inspect old-k8s-version-20220604161852-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220604161852-5712
	
	** /stderr **
	W0604 16:19:50.434747    8124 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:19:50.434848    8124 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:19:51.437870    8124 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:19:51.442545    8124 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:19:51.442545    8124 start.go:165] libmachine.API.Create for "old-k8s-version-20220604161852-5712" (driver="docker")
	I0604 16:19:51.442545    8124 client.go:168] LocalClient.Create starting
	I0604 16:19:51.442545    8124 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:19:51.442545    8124 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:51.442545    8124 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:51.442545    8124 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:19:51.442545    8124 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:51.442545    8124 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:51.456594    8124 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:19:52.472277    8124 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:19:52.472277    8124 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0156718s)
	I0604 16:19:52.479274    8124 network_create.go:272] running [docker network inspect old-k8s-version-20220604161852-5712] to gather additional debugging logs...
	I0604 16:19:52.479274    8124 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712
	W0604 16:19:53.540812    8124 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:53.540812    8124 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712: (1.061526s)
	I0604 16:19:53.540812    8124 network_create.go:275] error running [docker network inspect old-k8s-version-20220604161852-5712]: docker network inspect old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220604161852-5712
	I0604 16:19:53.540812    8124 network_create.go:277] output of [docker network inspect old-k8s-version-20220604161852-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220604161852-5712
	
	** /stderr **
	I0604 16:19:53.549758    8124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:19:54.566780    8124 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0170114s)
	I0604 16:19:54.582782    8124 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083e258] amended:false}} dirty:map[] misses:0}
	I0604 16:19:54.582782    8124 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:19:54.594776    8124 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00083e258] amended:true}} dirty:map[192.168.49.0:0xc00083e258 192.168.58.0:0xc0005c2578] misses:0}
	I0604 16:19:54.594776    8124 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:19:54.594776    8124 network_create.go:115] attempt to create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:19:54.606535    8124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712
	W0604 16:19:55.652814    8124 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:19:55.652814    8124 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: (1.0462682s)
	E0604 16:19:55.652814    8124 network_create.go:104] error while trying to create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24: create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e756d7bb0da52ac5ea148120f7df2fedc7e18e8279dffdcb0fcbade26a556b1 (br-3e756d7bb0da): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:19:55.652814    8124 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e756d7bb0da52ac5ea148120f7df2fedc7e18e8279dffdcb0fcbade26a556b1 (br-3e756d7bb0da): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e756d7bb0da52ac5ea148120f7df2fedc7e18e8279dffdcb0fcbade26a556b1 (br-3e756d7bb0da): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:19:55.669815    8124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:19:56.752275    8124 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0824485s)
	I0604 16:19:56.758287    8124 cli_runner.go:164] Run: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:19:57.836568    8124 cli_runner.go:211] docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:19:57.836568    8124 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0772717s)
	I0604 16:19:57.836568    8124 client.go:171] LocalClient.Create took 6.3939536s
	I0604 16:19:59.853510    8124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:19:59.859229    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:00.924207    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:00.924432    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0648117s)
	I0604 16:20:00.924660    8124 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:01.267088    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:02.301410    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:02.301444    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0341279s)
	W0604 16:20:02.301736    8124 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:20:02.301767    8124 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:02.312685    8124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:20:02.319842    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:03.417441    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:03.417441    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0975873s)
	I0604 16:20:03.417441    8124 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:03.656105    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:04.740235    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:04.740281    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0840482s)
	W0604 16:20:04.740315    8124 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:20:04.740315    8124 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:04.740315    8124 start.go:134] duration metric: createHost completed in 13.3021392s
	I0604 16:20:04.751776    8124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:20:04.758288    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:05.831580    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:05.831580    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0732801s)
	I0604 16:20:05.831580    8124 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:06.087595    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:07.180995    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:07.180995    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0933879s)
	W0604 16:20:07.180995    8124 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:20:07.180995    8124 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:07.190858    8124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:20:07.198795    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:08.311846    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:08.311846    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1130386s)
	I0604 16:20:08.311846    8124 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:08.524232    8124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:09.591837    8124 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:09.591837    8124 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0675936s)
	W0604 16:20:09.591837    8124 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:20:09.591837    8124 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:09.591837    8124 fix.go:57] fixHost completed within 46.280697s
	I0604 16:20:09.591837    8124 start.go:81] releasing machines lock for "old-k8s-version-20220604161852-5712", held for 46.2809005s
	W0604 16:20:09.591837    8124 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220604161852-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220604161852-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	I0604 16:20:09.596783    8124 out.go:177] 
	W0604 16:20:09.598840    8124 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	W0604 16:20:09.598840    8124 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:20:09.599791    8124 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:20:09.606812    8124 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220604161852-5712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1346978s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9244873s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:13.772530    6592 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (81.06s)

TestStartStop/group/embed-certs/serial/FirstStart (82.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220604161913-5712 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220604161913-5712 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m17.8999348s)

-- stdout --
	* [embed-certs-20220604161913-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node embed-certs-20220604161913-5712 in cluster embed-certs-20220604161913-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220604161913-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:19:13.980898    6336 out.go:296] Setting OutFile to fd 1784 ...
	I0604 16:19:14.048884    6336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:19:14.048884    6336 out.go:309] Setting ErrFile to fd 1808...
	I0604 16:19:14.048884    6336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:19:14.060022    6336 out.go:303] Setting JSON to false
	I0604 16:19:14.062700    6336 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10626,"bootTime":1654348928,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:19:14.062700    6336 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:19:14.073197    6336 out.go:177] * [embed-certs-20220604161913-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:19:14.076505    6336 notify.go:193] Checking for updates...
	I0604 16:19:14.079697    6336 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:19:14.082220    6336 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:19:14.084467    6336 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:19:14.089214    6336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:19:14.092849    6336 config.go:178] Loaded profile config "cert-expiration-20220604161540-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:19:14.093410    6336 config.go:178] Loaded profile config "missing-upgrade-20220604161559-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0604 16:19:14.093827    6336 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:19:14.094266    6336 config.go:178] Loaded profile config "old-k8s-version-20220604161852-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0604 16:19:14.094357    6336 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:19:16.764374    6336 docker.go:137] docker version: linux-20.10.16
	I0604 16:19:16.772373    6336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:19:18.779615    6336 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0070143s)
	I0604 16:19:18.780639    6336 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:19:17.7841255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:19:18.783892    6336 out.go:177] * Using the docker driver based on user configuration
	I0604 16:19:18.786612    6336 start.go:284] selected driver: docker
	I0604 16:19:18.786612    6336 start.go:806] validating driver "docker" against <nil>
	I0604 16:19:18.787145    6336 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:19:19.833961    6336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:19:21.814781    6336 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9807989s)
	I0604 16:19:21.815261    6336 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:45 OomKillDisable:true NGoroutines:48 SystemTime:2022-06-04 16:19:20.8412598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:19:21.815672    6336 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:19:21.816317    6336 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:19:21.818888    6336 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:19:21.823342    6336 cni.go:95] Creating CNI manager for ""
	I0604 16:19:21.823432    6336 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:19:21.823432    6336 start_flags.go:306] config:
	{Name:embed-certs-20220604161913-5712 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220604161913-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:19:21.827818    6336 out.go:177] * Starting control plane node embed-certs-20220604161913-5712 in cluster embed-certs-20220604161913-5712
	I0604 16:19:21.830553    6336 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:19:21.833003    6336 out.go:177] * Pulling base image ...
	I0604 16:19:21.835923    6336 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:19:21.835923    6336 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:19:21.835923    6336 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:19:21.835923    6336 cache.go:57] Caching tarball of preloaded images
	I0604 16:19:21.836664    6336 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:19:21.836989    6336 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:19:21.837205    6336 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220604161913-5712\config.json ...
	I0604 16:19:21.837431    6336 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220604161913-5712\config.json: {Name:mk6e025065828c6be025284d4990abfe2a364bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:19:22.904174    6336 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:19:22.904494    6336 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:22.904634    6336 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:22.904634    6336 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:19:22.904634    6336 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:19:22.904634    6336 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:19:22.904634    6336 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:19:22.904634    6336 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:19:22.904634    6336 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:25.264195    6336 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:19:25.264195    6336 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:19:25.264195    6336 start.go:352] acquiring machines lock for embed-certs-20220604161913-5712: {Name:mkcc405ffbb18d72833c60c092ab314d3a46ad85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:25.265205    6336 start.go:356] acquired machines lock for "embed-certs-20220604161913-5712" in 1.0105ms
	I0604 16:19:25.266203    6336 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220604161913-5712 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220604161913-5712 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:19:25.266203    6336 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:19:25.271196    6336 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:19:25.271196    6336 start.go:165] libmachine.API.Create for "embed-certs-20220604161913-5712" (driver="docker")
	I0604 16:19:25.271196    6336 client.go:168] LocalClient.Create starting
	I0604 16:19:25.271196    6336 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:19:25.272201    6336 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:25.272201    6336 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:25.272201    6336 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:19:25.272201    6336 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:25.272201    6336 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:25.280210    6336 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:19:26.358877    6336 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:19:26.358877    6336 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0784857s)
	I0604 16:19:26.367942    6336 network_create.go:272] running [docker network inspect embed-certs-20220604161913-5712] to gather additional debugging logs...
	I0604 16:19:26.367942    6336 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712
	W0604 16:19:27.474669    6336 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:19:27.474669    6336 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712: (1.1067155s)
	I0604 16:19:27.474669    6336 network_create.go:275] error running [docker network inspect embed-certs-20220604161913-5712]: docker network inspect embed-certs-20220604161913-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220604161913-5712
	I0604 16:19:27.474669    6336 network_create.go:277] output of [docker network inspect embed-certs-20220604161913-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220604161913-5712
	
	** /stderr **
	I0604 16:19:27.483467    6336 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:19:28.592514    6336 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1088609s)
	I0604 16:19:28.612953    6336 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000588c08] misses:0}
	I0604 16:19:28.612953    6336 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:19:28.612953    6336 network_create.go:115] attempt to create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:19:28.621639    6336 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712
	W0604 16:19:29.786229    6336 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:19:29.786229    6336 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: (1.1645778s)
	E0604 16:19:29.786229    6336 network_create.go:104] error while trying to create docker network embed-certs-20220604161913-5712 192.168.49.0/24: create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 527660294d68913a59e26582d64a3770f159bedc16567134a8d0d64ae1399448 (br-527660294d68): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:19:29.786229    6336 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 527660294d68913a59e26582d64a3770f159bedc16567134a8d0d64ae1399448 (br-527660294d68): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 527660294d68913a59e26582d64a3770f159bedc16567134a8d0d64ae1399448 (br-527660294d68): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:19:29.800237    6336 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:19:30.859243    6336 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0589202s)
	I0604 16:19:30.866619    6336 cli_runner.go:164] Run: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:19:31.938421    6336 cli_runner.go:211] docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:19:31.938655    6336 cli_runner.go:217] Completed: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0716104s)
	I0604 16:19:31.938722    6336 client.go:171] LocalClient.Create took 6.6674552s
	I0604 16:19:33.948915    6336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:19:33.957910    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:19:35.007832    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:19:35.007894    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.049675s)
	I0604 16:19:35.007894    6336 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:35.306855    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:19:36.363253    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:19:36.363253    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0563864s)
	W0604 16:19:36.363253    6336 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:19:36.363253    6336 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:36.374826    6336 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:19:36.382843    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:19:37.432577    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:19:37.432577    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0494358s)
	I0604 16:19:37.432577    6336 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:37.738139    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:19:38.804800    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:19:38.804800    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0666499s)
	W0604 16:19:38.804800    6336 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:19:38.804800    6336 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:38.804800    6336 start.go:134] duration metric: createHost completed in 13.5384531s
	I0604 16:19:38.804800    6336 start.go:81] releasing machines lock for "embed-certs-20220604161913-5712", held for 13.5394505s
	W0604 16:19:38.804800    6336 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	I0604 16:19:38.820810    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:39.896218    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:39.896218    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0753965s)
	I0604 16:19:39.896218    6336 delete.go:82] Unable to get host status for embed-certs-20220604161913-5712, assuming it has already been deleted: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	W0604 16:19:39.896218    6336 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	I0604 16:19:39.896218    6336 start.go:614] Will try again in 5 seconds ...
	I0604 16:19:44.904778    6336 start.go:352] acquiring machines lock for embed-certs-20220604161913-5712: {Name:mkcc405ffbb18d72833c60c092ab314d3a46ad85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:44.904778    6336 start.go:356] acquired machines lock for "embed-certs-20220604161913-5712" in 0s
	I0604 16:19:44.905324    6336 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:19:44.905442    6336 fix.go:55] fixHost starting: 
	I0604 16:19:44.917651    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:45.966015    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:45.966015    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0482106s)
	I0604 16:19:45.966015    6336 fix.go:103] recreateIfNeeded on embed-certs-20220604161913-5712: state= err=unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:45.966015    6336 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:19:45.970337    6336 out.go:177] * docker "embed-certs-20220604161913-5712" container is missing, will recreate.
	I0604 16:19:45.972852    6336 delete.go:124] DEMOLISHING embed-certs-20220604161913-5712 ...
	I0604 16:19:45.990604    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:47.070829    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:47.070829    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0800892s)
	W0604 16:19:47.070829    6336 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:47.070829    6336 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:47.086173    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:48.150210    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:48.150210    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0638976s)
	I0604 16:19:48.150210    6336 delete.go:82] Unable to get host status for embed-certs-20220604161913-5712, assuming it has already been deleted: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:48.157303    6336 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712
	W0604 16:19:49.227568    6336 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:19:49.227568    6336 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712: (1.0702534s)
	I0604 16:19:49.227568    6336 kic.go:356] could not find the container embed-certs-20220604161913-5712 to remove it. will try anyways
	I0604 16:19:49.237087    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:50.290189    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:50.290189    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0530901s)
	W0604 16:19:50.290189    6336 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:50.297189    6336 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0"
	W0604 16:19:51.325352    6336 cli_runner.go:211] docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:19:51.325548    6336 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0": (1.0281518s)
	I0604 16:19:51.325601    6336 oci.go:625] error shutdown embed-certs-20220604161913-5712: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:52.342020    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:53.415078    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:53.415158    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0727566s)
	I0604 16:19:53.415158    6336 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:53.415158    6336 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:19:53.415158    6336 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:53.895705    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:55.034514    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:55.034514    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1387974s)
	I0604 16:19:55.034514    6336 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:55.034514    6336 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:19:55.034514    6336 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:55.948833    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:57.050847    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:57.050847    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1020014s)
	I0604 16:19:57.050847    6336 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:57.050847    6336 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:19:57.050847    6336 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:57.708892    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:19:58.781693    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:58.781693    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0723264s)
	I0604 16:19:58.781693    6336 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:58.781693    6336 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:19:58.781693    6336 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:19:59.896805    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:20:00.954972    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:00.954972    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0570372s)
	I0604 16:20:00.955051    6336 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:00.955250    6336 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:20:00.955316    6336 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:02.482682    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:20:03.617582    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:03.617646    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1348289s)
	I0604 16:20:03.617673    6336 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:03.617673    6336 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:20:03.617673    6336 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:06.672660    6336 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:20:07.795276    6336 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:07.795276    6336 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1226041s)
	I0604 16:20:07.795276    6336 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:07.795276    6336 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:20:07.795276    6336 oci.go:88] couldn't shut down embed-certs-20220604161913-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	 
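The shutdown-verification loop above retries with growing, slightly irregular delays (462ms, 890ms, 636ms, 1.1s, 1.5s, 3.0s) before giving up. That pattern is consistent with jittered exponential backoff. The following is a minimal illustrative sketch of that retry-delay shape, not minikube's actual `retry.go` implementation; all names and parameters here are hypothetical:

```python
import random

def backoff_delays(base=0.5, factor=2.0, jitter=0.5, attempts=6):
    """Yield retry delays that grow roughly exponentially, each perturbed
    by random jitter -- similar in shape to the 462ms -> ... -> 3.04s
    progression logged above (illustrative only, not minikube's code)."""
    delay = base
    for _ in range(attempts):
        # Jitter spreads retries out so concurrent clients don't synchronize.
        yield delay * (1 + random.uniform(-jitter, jitter))
        delay *= factor

delays = list(backoff_delays())
```

Jitter explains why adjacent logged delays can dip (890ms followed by 636ms) while the overall trend still climbs toward the cap.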
	I0604 16:20:07.802271    6336 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220604161913-5712
	I0604 16:20:08.897267    6336 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220604161913-5712: (1.0949844s)
	I0604 16:20:08.903309    6336 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712
	W0604 16:20:09.998123    6336 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:09.998123    6336 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712: (1.0948026s)
	I0604 16:20:10.006135    6336 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:20:11.112411    6336 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:20:11.112411    6336 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1062641s)
	I0604 16:20:11.117682    6336 network_create.go:272] running [docker network inspect embed-certs-20220604161913-5712] to gather additional debugging logs...
	I0604 16:20:11.117682    6336 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712
	W0604 16:20:12.185700    6336 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:12.185700    6336 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712: (1.0680061s)
	I0604 16:20:12.185700    6336 network_create.go:275] error running [docker network inspect embed-certs-20220604161913-5712]: docker network inspect embed-certs-20220604161913-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220604161913-5712
	I0604 16:20:12.185700    6336 network_create.go:277] output of [docker network inspect embed-certs-20220604161913-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220604161913-5712
	
	** /stderr **
	W0604 16:20:12.186968    6336 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:20:12.186968    6336 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:20:13.197851    6336 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:20:13.202391    6336 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:20:13.202558    6336 start.go:165] libmachine.API.Create for "embed-certs-20220604161913-5712" (driver="docker")
	I0604 16:20:13.202558    6336 client.go:168] LocalClient.Create starting
	I0604 16:20:13.203164    6336 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:20:13.203164    6336 main.go:134] libmachine: Decoding PEM data...
	I0604 16:20:13.203164    6336 main.go:134] libmachine: Parsing certificate...
	I0604 16:20:13.203727    6336 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:20:13.203819    6336 main.go:134] libmachine: Decoding PEM data...
	I0604 16:20:13.203819    6336 main.go:134] libmachine: Parsing certificate...
	I0604 16:20:13.211705    6336 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:20:14.301266    6336 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:20:14.301266    6336 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0895486s)
	I0604 16:20:14.309646    6336 network_create.go:272] running [docker network inspect embed-certs-20220604161913-5712] to gather additional debugging logs...
	I0604 16:20:14.309646    6336 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712
	W0604 16:20:15.386914    6336 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:15.386914    6336 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712: (1.0772563s)
	I0604 16:20:15.386914    6336 network_create.go:275] error running [docker network inspect embed-certs-20220604161913-5712]: docker network inspect embed-certs-20220604161913-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220604161913-5712
	I0604 16:20:15.386914    6336 network_create.go:277] output of [docker network inspect embed-certs-20220604161913-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220604161913-5712
	
	** /stderr **
	I0604 16:20:15.393944    6336 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:20:16.488551    6336 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0945952s)
	I0604 16:20:16.507009    6336 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000588c08] amended:false}} dirty:map[] misses:0}
	I0604 16:20:16.507009    6336 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:20:16.521632    6336 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000588c08] amended:true}} dirty:map[192.168.49.0:0xc000588c08 192.168.58.0:0xc000588ea8] misses:0}
	I0604 16:20:16.521632    6336 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:20:16.521632    6336 network_create.go:115] attempt to create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:20:16.529550    6336 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712
	W0604 16:20:17.603453    6336 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:17.603518    6336 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: (1.0728572s)
	E0604 16:20:17.603518    6336 network_create.go:104] error while trying to create docker network embed-certs-20220604161913-5712 192.168.58.0/24: create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0cc6c03c04e3b9aa323ea854e5f29bc21ba1163e840d3868c45dc304a1317574 (br-0cc6c03c04e3): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:20:17.603518    6336 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0cc6c03c04e3b9aa323ea854e5f29bc21ba1163e840d3868c45dc304a1317574 (br-0cc6c03c04e3): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0cc6c03c04e3b9aa323ea854e5f29bc21ba1163e840d3868c45dc304a1317574 (br-0cc6c03c04e3): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:20:17.622477    6336 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:20:18.720669    6336 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0981808s)
	I0604 16:20:18.728668    6336 cli_runner.go:164] Run: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:20:19.795249    6336 cli_runner.go:211] docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:20:19.795249    6336 cli_runner.go:217] Completed: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0665694s)
	I0604 16:20:19.795249    6336 client.go:171] LocalClient.Create took 6.59262s
	I0604 16:20:21.815505    6336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:20:21.821602    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:22.938406    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:22.938406    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.1167916s)
	I0604 16:20:22.938406    6336 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:23.279828    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:24.362881    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:24.363016    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0829712s)
	W0604 16:20:24.363016    6336 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:20:24.363016    6336 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:24.378339    6336 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:20:24.387633    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:25.461421    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:25.461421    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0736369s)
	I0604 16:20:25.461421    6336 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:25.698597    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:26.787727    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:26.787727    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0890179s)
	W0604 16:20:26.787727    6336 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:20:26.787727    6336 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:26.787727    6336 start.go:134] duration metric: createHost completed in 13.5895757s
	I0604 16:20:26.797728    6336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:20:26.803752    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:27.874034    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:27.874034    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0702709s)
	I0604 16:20:27.874034    6336 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:28.136247    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:29.246908    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:29.246908    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.110649s)
	W0604 16:20:29.247174    6336 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:20:29.247174    6336 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:29.257495    6336 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:20:29.262932    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:30.338817    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:30.338817    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0758727s)
	I0604 16:20:30.338817    6336 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:30.552399    6336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:31.590572    6336 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:31.590572    6336 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0381624s)
	W0604 16:20:31.590572    6336 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:20:31.590572    6336 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:31.590572    6336 fix.go:57] fixHost completed within 46.6846264s
	I0604 16:20:31.590572    6336 start.go:81] releasing machines lock for "embed-certs-20220604161913-5712", held for 46.6852898s
	W0604 16:20:31.591720    6336 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220604161913-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220604161913-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	I0604 16:20:31.595620    6336 out.go:177] 
	W0604 16:20:31.597858    6336 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	W0604 16:20:31.597858    6336 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:20:31.598400    6336 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:20:31.602056    6336 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p embed-certs-20220604161913-5712 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1813929s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.9427586s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:35.840657    3304 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (82.13s)

TestStartStop/group/no-preload/serial/FirstStart (80.68s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220604161933-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220604161933-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m16.3718963s)

-- stdout --
	* [no-preload-20220604161933-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node no-preload-20220604161933-5712 in cluster no-preload-20220604161933-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220604161933-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:19:33.900919    8604 out.go:296] Setting OutFile to fd 1732 ...
	I0604 16:19:33.952934    8604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:19:33.952934    8604 out.go:309] Setting ErrFile to fd 1864...
	I0604 16:19:33.952934    8604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:19:33.966943    8604 out.go:303] Setting JSON to false
	I0604 16:19:33.969910    8604 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10646,"bootTime":1654348927,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:19:33.969910    8604 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:19:33.979912    8604 out.go:177] * [no-preload-20220604161933-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:19:33.985920    8604 notify.go:193] Checking for updates...
	I0604 16:19:33.989915    8604 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:19:33.991930    8604 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:19:33.994916    8604 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:19:33.998908    8604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:19:34.005922    8604 config.go:178] Loaded profile config "cert-expiration-20220604161540-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:19:34.005922    8604 config.go:178] Loaded profile config "embed-certs-20220604161913-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:19:34.005922    8604 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:19:34.006920    8604 config.go:178] Loaded profile config "old-k8s-version-20220604161852-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0604 16:19:34.006920    8604 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:19:36.597909    8604 docker.go:137] docker version: linux-20.10.16
	I0604 16:19:36.610927    8604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:19:38.647600    8604 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0366504s)
	I0604 16:19:38.648275    8604 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:19:37.649025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:19:38.651968    8604 out.go:177] * Using the docker driver based on user configuration
	I0604 16:19:38.653784    8604 start.go:284] selected driver: docker
	I0604 16:19:38.653784    8604 start.go:806] validating driver "docker" against <nil>
	I0604 16:19:38.653784    8604 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:19:38.721841    8604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:19:40.719672    8604 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.99781s)
	I0604 16:19:40.719672    8604 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:19:39.7270167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:19:40.719672    8604 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:19:40.720799    8604 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:19:40.724302    8604 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:19:40.726352    8604 cni.go:95] Creating CNI manager for ""
	I0604 16:19:40.726352    8604 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:19:40.726352    8604 start_flags.go:306] config:
	{Name:no-preload-20220604161933-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220604161933-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:19:40.729615    8604 out.go:177] * Starting control plane node no-preload-20220604161933-5712 in cluster no-preload-20220604161933-5712
	I0604 16:19:40.732057    8604 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:19:40.734405    8604 out.go:177] * Pulling base image ...
	I0604 16:19:40.737634    8604 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:19:40.737634    8604 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:19:40.737634    8604 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220604161933-5712\config.json ...
	I0604 16:19:40.737634    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0604 16:19:40.738582    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6
	I0604 16:19:40.738582    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6
	I0604 16:19:40.738582    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6
	I0604 16:19:40.738582    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0
	I0604 16:19:40.738582    8604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220604161933-5712\config.json: {Name:mkb42693934060b0e70397aec9e56c96d2eccf58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:19:40.738582    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
	I0604 16:19:40.738582    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6
	I0604 16:19:40.738582    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6
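The `windows sanitize` lines above record image references being mapped to cache file names: the `:` before an image tag is illegal in a Windows file name, so it is rewritten to `_` while the drive prefix (`C:`) is kept. A minimal sketch of that rule, assuming a hypothetical helper name (this is an illustration of the logged behaviour, not minikube's `localpath.go` code):

```python
import ntpath

def sanitize_windows_path(path: str) -> str:
    """Replace ':' characters that are illegal in Windows file names,
    keeping the drive prefix (e.g. 'C:') intact."""
    drive, rest = ntpath.splitdrive(path)
    return drive + rest.replace(":", "_")

# Mirrors the mapping shown in the log for the pause image:
print(sanitize_windows_path(r"C:\cache\images\amd64\k8s.gcr.io\pause:3.6"))
# C:\cache\images\amd64\k8s.gcr.io\pause_3.6
```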
	I0604 16:19:40.907199    8604 cache.go:107] acquiring lock: {Name:mka0a7f9fce0e132e7529c42bed359c919fc231b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.907199    8604 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.907422    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 exists
	I0604 16:19:40.907422    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0604 16:19:40.907422    8604 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns\\coredns_v1.8.6" took 168.8384ms
	I0604 16:19:40.907422    8604 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 succeeded
	I0604 16:19:40.907422    8604 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 168.8384ms
	I0604 16:19:40.907422    8604 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0604 16:19:40.907971    8604 cache.go:107] acquiring lock: {Name:mkb7d2f7b32c5276784ba454e50c746d7fc6c05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.908270    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 exists
	I0604 16:19:40.908361    8604 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.6" took 169.7772ms
	I0604 16:19:40.908361    8604 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 succeeded
	I0604 16:19:40.924979    8604 cache.go:107] acquiring lock: {Name:mk9255ee8c390126b963cceac501a1fcc40ecb6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.925134    8604 cache.go:107] acquiring lock: {Name:mk90a34f529b9ea089d74e18a271c58e34606f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.925134    8604 cache.go:107] acquiring lock: {Name:mk1cf2f2eee53b81f1c95945c2dd3783d0c7d992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.925220    8604 cache.go:107] acquiring lock: {Name:mk40b809628c4e9673e2a41bf9fb31b8a6b3529d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.925312    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 exists
	I0604 16:19:40.925498    8604 cache.go:107] acquiring lock: {Name:mk3772b9dcb36c3cbc3aa4dfbe66c5266092e2c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:40.925498    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 exists
	I0604 16:19:40.925410    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 exists
	I0604 16:19:40.925658    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 exists
	I0604 16:19:40.925498    8604 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.23.6" took 186.9143ms
	I0604 16:19:40.925718    8604 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 succeeded
	I0604 16:19:40.925718    8604 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.23.6" took 187.0747ms
	I0604 16:19:40.925804    8604 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 succeeded
	I0604 16:19:40.925718    8604 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.23.6" took 187.0747ms
	I0604 16:19:40.925862    8604 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 succeeded
	I0604 16:19:40.925804    8604 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.23.6" took 187.2199ms
	I0604 16:19:40.925862    8604 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 succeeded
	I0604 16:19:40.925862    8604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 exists
	I0604 16:19:40.925862    8604 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.5.1-0" took 187.2786ms
	I0604 16:19:40.925862    8604 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 succeeded
	I0604 16:19:40.925862    8604 cache.go:87] Successfully saved all images to host disk.
	I0604 16:19:41.819197    8604 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:19:41.819501    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:41.819771    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:41.819771    8604 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:19:41.819771    8604 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:19:41.819771    8604 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:19:41.819771    8604 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:19:41.819771    8604 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:19:41.820350    8604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:19:44.046378    8604 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:19:44.046508    8604 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:19:44.046600    8604 start.go:352] acquiring machines lock for no-preload-20220604161933-5712: {Name:mkb9157c767b2183b064e561f5ba73bb0b5648b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:19:44.046850    8604 start.go:356] acquired machines lock for "no-preload-20220604161933-5712" in 214.8µs
	I0604 16:19:44.047163    8604 start.go:91] Provisioning new machine with config: &{Name:no-preload-20220604161933-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220604161933-5712 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:19:44.047289    8604 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:19:44.050640    8604 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:19:44.051399    8604 start.go:165] libmachine.API.Create for "no-preload-20220604161933-5712" (driver="docker")
	I0604 16:19:44.051585    8604 client.go:168] LocalClient.Create starting
	I0604 16:19:44.051652    8604 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:19:44.052521    8604 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:44.052588    8604 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:44.052811    8604 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:19:44.052811    8604 main.go:134] libmachine: Decoding PEM data...
	I0604 16:19:44.052811    8604 main.go:134] libmachine: Parsing certificate...
	I0604 16:19:44.065223    8604 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:19:45.091503    8604 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:19:45.091586    8604 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0262083s)
	I0604 16:19:45.099990    8604 network_create.go:272] running [docker network inspect no-preload-20220604161933-5712] to gather additional debugging logs...
	I0604 16:19:45.099990    8604 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712
	W0604 16:19:46.123577    8604 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:19:46.123577    8604 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712: (1.0235752s)
	I0604 16:19:46.123577    8604 network_create.go:275] error running [docker network inspect no-preload-20220604161933-5712]: docker network inspect no-preload-20220604161933-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220604161933-5712
	I0604 16:19:46.123577    8604 network_create.go:277] output of [docker network inspect no-preload-20220604161933-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220604161933-5712
	
	** /stderr **
	I0604 16:19:46.130575    8604 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:19:47.209242    8604 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0786546s)
	I0604 16:19:47.228253    8604 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000062f0] misses:0}
	I0604 16:19:47.228253    8604 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
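The subnet record above (gateway `.1`, client range `.2`–`.254`, broadcast `.255`) follows directly from the chosen `/24`. The derivation can be checked with Python's `ipaddress` module (a standalone illustration of the arithmetic, not minikube's `network.go` code):

```python
import ipaddress

net = ipaddress.ip_network("192.168.49.0/24")
gateway = net.network_address + 1       # first usable host: 192.168.49.1
client_min = net.network_address + 2    # 192.168.49.2
client_max = net.broadcast_address - 1  # 192.168.49.254
print(net.netmask, gateway, client_min, client_max, net.broadcast_address)
# 255.255.255.0 192.168.49.1 192.168.49.2 192.168.49.254 192.168.49.255
```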
	I0604 16:19:47.228253    8604 network_create.go:115] attempt to create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:19:47.235285    8604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712
	W0604 16:19:48.289847    8604 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:19:48.289847    8604 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: (1.0545509s)
	E0604 16:19:48.289847    8604 network_create.go:104] error while trying to create docker network no-preload-20220604161933-5712 192.168.49.0/24: create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network af9b26e35fb1daa164715ebddec52aef7a9f912c95256e64489283da6b0a8d6f (br-af9b26e35fb1): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:19:48.289847    8604 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network af9b26e35fb1daa164715ebddec52aef7a9f912c95256e64489283da6b0a8d6f (br-af9b26e35fb1): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network af9b26e35fb1daa164715ebddec52aef7a9f912c95256e64489283da6b0a8d6f (br-af9b26e35fb1): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
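The `docker network create` failure above is the daemon refusing a bridge whose requested subnet (`192.168.49.0/24`) shares address space with an existing bridge network. The overlap test itself is plain CIDR arithmetic, sketched here with Python's `ipaddress` module (illustrative only; it is not the daemon's implementation, and which existing network collides cannot be determined from this log):

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# An identical /24, or a wider block containing it, both collide:
print(cidrs_overlap("192.168.49.0/24", "192.168.49.0/24"))  # True
print(cidrs_overlap("192.168.49.0/24", "192.168.0.0/16"))   # True
print(cidrs_overlap("192.168.49.0/24", "192.168.50.0/24"))  # False
```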
	I0604 16:19:48.304851    8604 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:19:49.383088    8604 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0782253s)
	I0604 16:19:49.389088    8604 cli_runner.go:164] Run: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:19:50.449040    8604 cli_runner.go:211] docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:19:50.449040    8604 cli_runner.go:217] Completed: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0599399s)
	I0604 16:19:50.449040    8604 client.go:171] LocalClient.Create took 6.3973857s
	I0604 16:19:52.469279    8604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:19:52.476275    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:19:53.524660    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:19:53.524660    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0483729s)
	I0604 16:19:53.524660    8604 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:19:53.815005    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:19:54.864816    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:19:54.864816    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0497994s)
	W0604 16:19:54.864816    8604 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:19:54.864816    8604 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:19:54.877181    8604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:19:54.884213    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:19:55.970394    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:19:55.970575    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0861687s)
	I0604 16:19:55.970575    8604 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:19:56.269112    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:19:57.351777    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:19:57.351877    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0825628s)
	W0604 16:19:57.351926    8604 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:19:57.351926    8604 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:19:57.351926    8604 start.go:134] duration metric: createHost completed in 13.3044933s
	I0604 16:19:57.351926    8604 start.go:81] releasing machines lock for "no-preload-20220604161933-5712", held for 13.304826s
	W0604 16:19:57.351926    8604 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	I0604 16:19:57.366654    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:19:58.418867    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:19:58.418867    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0510844s)
	I0604 16:19:58.418867    8604 delete.go:82] Unable to get host status for no-preload-20220604161933-5712, assuming it has already been deleted: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	W0604 16:19:58.418867    8604 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	I0604 16:19:58.418867    8604 start.go:614] Will try again in 5 seconds ...
	I0604 16:20:03.433786    8604 start.go:352] acquiring machines lock for no-preload-20220604161933-5712: {Name:mkb9157c767b2183b064e561f5ba73bb0b5648b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:20:03.434097    8604 start.go:356] acquired machines lock for "no-preload-20220604161933-5712" in 214.5µs
	I0604 16:20:03.434266    8604 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:20:03.434266    8604 fix.go:55] fixHost starting: 
	I0604 16:20:03.450926    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:04.539782    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:04.539782    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0888441s)
	I0604 16:20:04.539782    8604 fix.go:103] recreateIfNeeded on no-preload-20220604161933-5712: state= err=unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:04.539782    8604 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:20:04.542788    8604 out.go:177] * docker "no-preload-20220604161933-5712" container is missing, will recreate.
	I0604 16:20:04.545794    8604 delete.go:124] DEMOLISHING no-preload-20220604161933-5712 ...
	I0604 16:20:04.558741    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:05.625760    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:05.625760    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0670075s)
	W0604 16:20:05.625760    8604 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:05.625760    8604 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:05.639762    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:06.722933    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:06.722933    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0831601s)
	I0604 16:20:06.722933    8604 delete.go:82] Unable to get host status for no-preload-20220604161933-5712, assuming it has already been deleted: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:06.732554    8604 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220604161933-5712
	W0604 16:20:07.826328    8604 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:07.826328    8604 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220604161933-5712: (1.0937623s)
	I0604 16:20:07.826328    8604 kic.go:356] could not find the container no-preload-20220604161933-5712 to remove it. will try anyways
	I0604 16:20:07.833293    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:08.881254    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:08.881254    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0479501s)
	W0604 16:20:08.881254    8604 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:08.888267    8604 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0"
	W0604 16:20:10.030091    8604 cli_runner.go:211] docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:20:10.030091    8604 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0": (1.1418125s)
	I0604 16:20:10.030091    8604 oci.go:625] error shutdown no-preload-20220604161933-5712: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:11.046941    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:12.169663    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:12.169663    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1227096s)
	I0604 16:20:12.169663    8604 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:12.169663    8604 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:20:12.169663    8604 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:12.654704    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:13.740783    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:13.740783    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0850317s)
	I0604 16:20:13.740783    8604 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:13.740783    8604 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:20:13.740783    8604 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:14.641788    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:15.748208    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:15.748208    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1064073s)
	I0604 16:20:15.748208    8604 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:15.748208    8604 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:20:15.748208    8604 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:16.404332    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:17.479616    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:17.479616    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0752717s)
	I0604 16:20:17.479616    8604 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:17.479616    8604 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:20:17.479616    8604 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:18.608199    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:19.716234    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:19.716234    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1080235s)
	I0604 16:20:19.716234    8604 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:19.716234    8604 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:20:19.716234    8604 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:21.244985    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:22.304700    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:22.304700    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0597039s)
	I0604 16:20:22.304700    8604 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:22.304700    8604 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:20:22.304700    8604 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:25.362550    8604 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:20:26.477895    8604 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:26.477957    8604 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1152394s)
	I0604 16:20:26.477957    8604 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:26.477957    8604 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:20:26.477957    8604 oci.go:88] couldn't shut down no-preload-20220604161933-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	 
	I0604 16:20:26.488923    8604 cli_runner.go:164] Run: docker rm -f -v no-preload-20220604161933-5712
	I0604 16:20:27.570345    8604 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220604161933-5712: (1.0814095s)
	I0604 16:20:27.578722    8604 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220604161933-5712
	W0604 16:20:28.647880    8604 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:28.647880    8604 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220604161933-5712: (1.069147s)
	I0604 16:20:28.655107    8604 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:20:29.742441    8604 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:20:29.742625    8604 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0872265s)
	I0604 16:20:29.750143    8604 network_create.go:272] running [docker network inspect no-preload-20220604161933-5712] to gather additional debugging logs...
	I0604 16:20:29.750143    8604 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712
	W0604 16:20:30.822549    8604 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:30.822692    8604 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712: (1.0716708s)
	I0604 16:20:30.822692    8604 network_create.go:275] error running [docker network inspect no-preload-20220604161933-5712]: docker network inspect no-preload-20220604161933-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220604161933-5712
	I0604 16:20:30.822692    8604 network_create.go:277] output of [docker network inspect no-preload-20220604161933-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220604161933-5712
	
	** /stderr **
	W0604 16:20:30.823493    8604 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:20:30.824022    8604 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:20:31.838568    8604 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:20:31.850064    8604 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:20:31.850064    8604 start.go:165] libmachine.API.Create for "no-preload-20220604161933-5712" (driver="docker")
	I0604 16:20:31.850064    8604 client.go:168] LocalClient.Create starting
	I0604 16:20:31.850064    8604 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:20:31.850064    8604 main.go:134] libmachine: Decoding PEM data...
	I0604 16:20:31.851039    8604 main.go:134] libmachine: Parsing certificate...
	I0604 16:20:31.851039    8604 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:20:31.851039    8604 main.go:134] libmachine: Decoding PEM data...
	I0604 16:20:31.851039    8604 main.go:134] libmachine: Parsing certificate...
	I0604 16:20:31.860021    8604 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:20:32.960810    8604 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:20:32.960810    8604 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1007771s)
	I0604 16:20:32.968774    8604 network_create.go:272] running [docker network inspect no-preload-20220604161933-5712] to gather additional debugging logs...
	I0604 16:20:32.968774    8604 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712
	W0604 16:20:34.003017    8604 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:34.003017    8604 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712: (1.0342319s)
	I0604 16:20:34.003017    8604 network_create.go:275] error running [docker network inspect no-preload-20220604161933-5712]: docker network inspect no-preload-20220604161933-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220604161933-5712
	I0604 16:20:34.003017    8604 network_create.go:277] output of [docker network inspect no-preload-20220604161933-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220604161933-5712
	
	** /stderr **
	I0604 16:20:34.009010    8604 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:20:35.051406    8604 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0423845s)
	I0604 16:20:35.069346    8604 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062f0] amended:false}} dirty:map[] misses:0}
	I0604 16:20:35.069346    8604 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:20:35.085600    8604 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062f0] amended:true}} dirty:map[192.168.49.0:0xc0000062f0 192.168.58.0:0xc0010b41a8] misses:0}
	I0604 16:20:35.085600    8604 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:20:35.085600    8604 network_create.go:115] attempt to create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:20:35.093817    8604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712
	W0604 16:20:36.152851    8604 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:36.152851    8604 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: (1.0590218s)
	E0604 16:20:36.152851    8604 network_create.go:104] error while trying to create docker network no-preload-20220604161933-5712 192.168.58.0/24: create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b32b63252dc4d9f1c582be8a80d8e66ea094ae867d35d9a7d7a8db8966511535 (br-b32b63252dc4): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:20:36.152851    8604 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b32b63252dc4d9f1c582be8a80d8e66ea094ae867d35d9a7d7a8db8966511535 (br-b32b63252dc4): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b32b63252dc4d9f1c582be8a80d8e66ea094ae867d35d9a7d7a8db8966511535 (br-b32b63252dc4): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:20:36.166883    8604 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:20:37.246153    8604 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0792582s)
	I0604 16:20:37.253190    8604 cli_runner.go:164] Run: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:20:38.331368    8604 cli_runner.go:211] docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:20:38.331368    8604 cli_runner.go:217] Completed: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0781664s)
	I0604 16:20:38.331368    8604 client.go:171] LocalClient.Create took 6.4812346s
	I0604 16:20:40.348076    8604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:20:40.355523    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:41.423695    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:41.423695    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0680972s)
	I0604 16:20:41.423695    8604 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:41.762242    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:42.830124    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:42.830124    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0678699s)
	W0604 16:20:42.830124    8604 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:20:42.830124    8604 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:42.840722    8604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:20:42.847775    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:43.913794    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:43.913794    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0660078s)
	I0604 16:20:43.913794    8604 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:44.142711    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:45.221072    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:45.221072    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0781069s)
	W0604 16:20:45.221348    8604 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:20:45.221414    8604 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:45.221483    8604 start.go:134] duration metric: createHost completed in 13.3826174s
	I0604 16:20:45.234233    8604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:20:45.242087    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:46.291141    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:46.291141    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0490424s)
	I0604 16:20:46.291141    8604 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:46.553728    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:47.660823    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:47.660823    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1061213s)
	W0604 16:20:47.660823    8604 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:20:47.660823    8604 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:47.672597    8604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:20:47.679064    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:48.749197    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:48.749197    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.070121s)
	I0604 16:20:48.749197    8604 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:48.962453    8604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:20:49.996161    8604 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:20:49.996161    8604 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0336971s)
	W0604 16:20:49.996161    8604 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:20:49.996161    8604 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:20:49.996161    8604 fix.go:57] fixHost completed within 46.5613918s
	I0604 16:20:49.996161    8604 start.go:81] releasing machines lock for "no-preload-20220604161933-5712", held for 46.5615608s
	W0604 16:20:49.996161    8604 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220604161933-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220604161933-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	I0604 16:20:50.001184    8604 out.go:177] 
	W0604 16:20:50.005001    8604 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	W0604 16:20:50.005350    8604 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:20:50.005350    8604 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:20:50.008763    8604 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-20220604161933-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1521004s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (3.0724159s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:54.330788    2272 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (80.68s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220604161852-5712 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220604161852-5712 create -f testdata\busybox.yaml: exit status 1 (273.5084ms)

** stderr ** 
	error: context "old-k8s-version-20220604161852-5712" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context old-k8s-version-20220604161852-5712 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1374119s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9268256s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:18.128451    8436 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1205742s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9371251s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:22.194162    8848 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220604161852-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220604161852-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9079783s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220604161852-5712 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220604161852-5712 describe deploy/metrics-server -n kube-system: exit status 1 (231.6026ms)

** stderr ** 
	error: context "old-k8s-version-20220604161852-5712" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220604161852-5712 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1286827s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9143024s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:29.402529    6272 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.20s)

TestStartStop/group/old-k8s-version/serial/Stop (26.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220604161852-5712 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220604161852-5712 --alsologtostderr -v=3: exit status 82 (22.756323s)

-- stdout --
	* Stopping node "old-k8s-version-20220604161852-5712"  ...
	* Stopping node "old-k8s-version-20220604161852-5712"  ...
	* Stopping node "old-k8s-version-20220604161852-5712"  ...
	* Stopping node "old-k8s-version-20220604161852-5712"  ...
	* Stopping node "old-k8s-version-20220604161852-5712"  ...
	* Stopping node "old-k8s-version-20220604161852-5712"  ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:20:29.718338    7164 out.go:296] Setting OutFile to fd 1748 ...
	I0604 16:20:29.779485    7164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:20:29.779485    7164 out.go:309] Setting ErrFile to fd 1916...
	I0604 16:20:29.779485    7164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:20:29.802571    7164 out.go:303] Setting JSON to false
	I0604 16:20:29.803365    7164 daemonize_windows.go:44] trying to kill existing schedule stop for profile old-k8s-version-20220604161852-5712...
	I0604 16:20:29.814933    7164 ssh_runner.go:195] Run: systemctl --version
	I0604 16:20:29.823762    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:32.344492    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:32.344492    7164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (2.5205415s)
	I0604 16:20:32.356298    7164 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0604 16:20:32.363288    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:33.449062    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:33.449062    7164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0857624s)
	I0604 16:20:33.449062    7164 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:33.827198    7164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:20:34.880030    7164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:20:34.880030    7164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0528208s)
	I0604 16:20:34.880030    7164 openrc.go:165] stop output: 
	E0604 16:20:34.880030    7164 daemonize_windows.go:38] error terminating scheduled stop for profile old-k8s-version-20220604161852-5712: stopping schedule-stop service for profile old-k8s-version-20220604161852-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:34.880030    7164 mustload.go:65] Loading cluster: old-k8s-version-20220604161852-5712
	I0604 16:20:34.881027    7164 config.go:178] Loaded profile config "old-k8s-version-20220604161852-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0604 16:20:34.881027    7164 stop.go:39] StopHost: old-k8s-version-20220604161852-5712
	I0604 16:20:34.885023    7164 out.go:177] * Stopping node "old-k8s-version-20220604161852-5712"  ...
	I0604 16:20:34.901013    7164 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:20:35.965430    7164 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:35.965430    7164 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0644057s)
	W0604 16:20:35.965430    7164 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	W0604 16:20:35.965430    7164 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:35.965430    7164 retry.go:31] will retry after 937.714187ms: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:36.917074    7164 stop.go:39] StopHost: old-k8s-version-20220604161852-5712
	I0604 16:20:36.922213    7164 out.go:177] * Stopping node "old-k8s-version-20220604161852-5712"  ...
	I0604 16:20:36.936920    7164 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:20:38.036691    7164 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:38.036790    7164 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0994443s)
	W0604 16:20:38.036880    7164 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	W0604 16:20:38.036970    7164 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:38.037069    7164 retry.go:31] will retry after 1.386956246s: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:39.436955    7164 stop.go:39] StopHost: old-k8s-version-20220604161852-5712
	I0604 16:20:39.441423    7164 out.go:177] * Stopping node "old-k8s-version-20220604161852-5712"  ...
	I0604 16:20:39.459659    7164 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:20:40.537840    7164 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:40.537934    7164 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.07808s)
	W0604 16:20:40.537979    7164 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	W0604 16:20:40.537979    7164 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:40.537979    7164 retry.go:31] will retry after 2.670351914s: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:43.209012    7164 stop.go:39] StopHost: old-k8s-version-20220604161852-5712
	I0604 16:20:43.214316    7164 out.go:177] * Stopping node "old-k8s-version-20220604161852-5712"  ...
	I0604 16:20:43.230748    7164 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:20:44.307445    7164 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:44.307445    7164 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0766853s)
	W0604 16:20:44.307445    7164 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	W0604 16:20:44.307445    7164 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:44.307445    7164 retry.go:31] will retry after 1.909024939s: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:46.230103    7164 stop.go:39] StopHost: old-k8s-version-20220604161852-5712
	I0604 16:20:46.233891    7164 out.go:177] * Stopping node "old-k8s-version-20220604161852-5712"  ...
	I0604 16:20:46.253130    7164 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:20:47.347449    7164 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:47.347449    7164 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0942339s)
	W0604 16:20:47.347449    7164 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	W0604 16:20:47.347449    7164 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:47.347449    7164 retry.go:31] will retry after 3.323628727s: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:50.676574    7164 stop.go:39] StopHost: old-k8s-version-20220604161852-5712
	I0604 16:20:50.683553    7164 out.go:177] * Stopping node "old-k8s-version-20220604161852-5712"  ...
	I0604 16:20:50.698992    7164 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:20:51.834164    7164 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:51.834164    7164 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.1351594s)
	W0604 16:20:51.834164    7164 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	W0604 16:20:51.834164    7164 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:20:51.858383    7164 out.go:177] 
	W0604 16:20:51.860818    7164 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220604161852-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220604161852-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:20:51.860818    7164 out.go:239] * 
	* 
	W0604 16:20:52.143093    7164 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:20:52.145954    7164 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p old-k8s-version-20220604161852-5712 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1269258s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9956644s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:56.281344    7216 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (26.89s)

TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220604161913-5712 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context embed-certs-20220604161913-5712 create -f testdata\busybox.yaml: exit status 1 (259.0554ms)

** stderr ** 
	error: context "embed-certs-20220604161913-5712" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context embed-certs-20220604161913-5712 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1322945s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.914809s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:40.162558    7532 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1370886s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.8835038s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:44.196484    6912 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220604161913-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220604161913-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9262464s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220604161913-5712 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context embed-certs-20220604161913-5712 describe deploy/metrics-server -n kube-system: exit status 1 (250.1506ms)

** stderr ** 
	error: context "embed-certs-20220604161913-5712" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20220604161913-5712 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1131127s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.9643902s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:51.464708    8440 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.27s)

TestStartStop/group/embed-certs/serial/Stop (27.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220604161913-5712 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p embed-certs-20220604161913-5712 --alsologtostderr -v=3: exit status 82 (23.2096965s)

-- stdout --
	* Stopping node "embed-certs-20220604161913-5712"  ...
	* Stopping node "embed-certs-20220604161913-5712"  ...
	* Stopping node "embed-certs-20220604161913-5712"  ...
	* Stopping node "embed-certs-20220604161913-5712"  ...
	* Stopping node "embed-certs-20220604161913-5712"  ...
	* Stopping node "embed-certs-20220604161913-5712"  ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:20:51.730989    8236 out.go:296] Setting OutFile to fd 1908 ...
	I0604 16:20:51.804155    8236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:20:51.804155    8236 out.go:309] Setting ErrFile to fd 1592...
	I0604 16:20:51.804155    8236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:20:51.815153    8236 out.go:303] Setting JSON to false
	I0604 16:20:51.815153    8236 daemonize_windows.go:44] trying to kill existing schedule stop for profile embed-certs-20220604161913-5712...
	I0604 16:20:51.826156    8236 ssh_runner.go:195] Run: systemctl --version
	I0604 16:20:51.840234    8236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:54.503917    8236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:54.503917    8236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (2.6635805s)
	I0604 16:20:54.515448    8236 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0604 16:20:54.522727    8236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:55.606404    8236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:55.606404    8236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0836644s)
	I0604 16:20:55.606404    8236 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:55.975717    8236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:20:57.113122    8236 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:20:57.113257    8236 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.1372521s)
	I0604 16:20:57.113257    8236 openrc.go:165] stop output: 
	E0604 16:20:57.113257    8236 daemonize_windows.go:38] error terminating scheduled stop for profile embed-certs-20220604161913-5712: stopping schedule-stop service for profile embed-certs-20220604161913-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:57.113257    8236 mustload.go:65] Loading cluster: embed-certs-20220604161913-5712
	I0604 16:20:57.113986    8236 config.go:178] Loaded profile config "embed-certs-20220604161913-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:20:57.114553    8236 stop.go:39] StopHost: embed-certs-20220604161913-5712
	I0604 16:20:57.122181    8236 out.go:177] * Stopping node "embed-certs-20220604161913-5712"  ...
	I0604 16:20:57.141619    8236 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:20:58.299096    8236 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:20:58.299164    8236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1571932s)
	W0604 16:20:58.299164    8236 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	W0604 16:20:58.299164    8236 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:58.299164    8236 retry.go:31] will retry after 937.714187ms: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:20:59.245086    8236 stop.go:39] StopHost: embed-certs-20220604161913-5712
	I0604 16:20:59.249874    8236 out.go:177] * Stopping node "embed-certs-20220604161913-5712"  ...
	I0604 16:20:59.273497    8236 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:00.378062    8236 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:00.378062    8236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1045536s)
	W0604 16:21:00.378062    8236 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	W0604 16:21:00.378062    8236 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:00.378062    8236 retry.go:31] will retry after 1.386956246s: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:01.766367    8236 stop.go:39] StopHost: embed-certs-20220604161913-5712
	I0604 16:21:01.771629    8236 out.go:177] * Stopping node "embed-certs-20220604161913-5712"  ...
	I0604 16:21:01.787021    8236 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:02.919437    8236 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:02.919485    8236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1321991s)
	W0604 16:21:02.919550    8236 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	W0604 16:21:02.919597    8236 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:02.919597    8236 retry.go:31] will retry after 2.670351914s: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:05.593705    8236 stop.go:39] StopHost: embed-certs-20220604161913-5712
	I0604 16:21:05.598524    8236 out.go:177] * Stopping node "embed-certs-20220604161913-5712"  ...
	I0604 16:21:05.629357    8236 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:06.724538    8236 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:06.724538    8236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0951046s)
	W0604 16:21:06.724538    8236 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	W0604 16:21:06.724538    8236 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:06.724538    8236 retry.go:31] will retry after 1.909024939s: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:08.648515    8236 stop.go:39] StopHost: embed-certs-20220604161913-5712
	I0604 16:21:08.656377    8236 out.go:177] * Stopping node "embed-certs-20220604161913-5712"  ...
	I0604 16:21:08.673454    8236 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:09.875854    8236 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:09.876004    8236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.20223s)
	W0604 16:21:09.876004    8236 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	W0604 16:21:09.876004    8236 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:09.876133    8236 retry.go:31] will retry after 3.323628727s: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:13.209223    8236 stop.go:39] StopHost: embed-certs-20220604161913-5712
	I0604 16:21:13.215959    8236 out.go:177] * Stopping node "embed-certs-20220604161913-5712"  ...
	I0604 16:21:13.231357    8236 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:14.362124    8236 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:14.362124    8236 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1306306s)
	W0604 16:21:14.362223    8236 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	W0604 16:21:14.362223    8236 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:14.365284    8236 out.go:177] 
	W0604 16:21:14.367799    8236 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20220604161913-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:21:14.367799    8236 out.go:239] * 
	W0604 16:21:14.658955    8236 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:21:14.662445    8236 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p embed-certs-20220604161913-5712 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.157174s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.9547419s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:21:18.803678    1072 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (27.34s)

TestStartStop/group/no-preload/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220604161933-5712 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context no-preload-20220604161933-5712 create -f testdata\busybox.yaml: exit status 1 (253.9662ms)

** stderr ** 
	error: context "no-preload-20220604161933-5712" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context no-preload-20220604161933-5712 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1434291s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (2.9765576s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:58.724389    2952 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1429828s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (2.872653s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:21:02.743540    7536 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (10.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9473187s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:20:59.229307    6044 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220604161852-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220604161852-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0237911s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.2233362s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (3.0037681s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:21:06.489349    7504 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (10.21s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220604161933-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220604161933-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9522742s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220604161933-5712 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context no-preload-20220604161933-5712 describe deploy/metrics-server -n kube-system: exit status 1 (243.9437ms)

** stderr ** 
	error: context "no-preload-20220604161933-5712" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20220604161933-5712 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1218495s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (2.9838434s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:21:10.067622    2084 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.32s)

TestStartStop/group/old-k8s-version/serial/SecondStart (117.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220604161852-5712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220604161852-5712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (1m53.3944583s)

-- stdout --
	* [old-k8s-version-20220604161852-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220604161852-5712 in cluster old-k8s-version-20220604161852-5712
	* Pulling base image ...
	* docker "old-k8s-version-20220604161852-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220604161852-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:21:06.757399    8536 out.go:296] Setting OutFile to fd 1508 ...
	I0604 16:21:06.818751    8536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:06.818751    8536 out.go:309] Setting ErrFile to fd 1504...
	I0604 16:21:06.818751    8536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:06.829754    8536 out.go:303] Setting JSON to false
	I0604 16:21:06.832743    8536 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10738,"bootTime":1654348928,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:21:06.832743    8536 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:21:06.836820    8536 out.go:177] * [old-k8s-version-20220604161852-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:21:06.839219    8536 notify.go:193] Checking for updates...
	I0604 16:21:06.841372    8536 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:21:06.843671    8536 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:21:06.845811    8536 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:21:06.847768    8536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:21:06.852745    8536 config.go:178] Loaded profile config "old-k8s-version-20220604161852-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0604 16:21:06.858382    8536 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0604 16:21:06.860418    8536 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:21:09.562503    8536 docker.go:137] docker version: linux-20.10.16
	I0604 16:21:09.571554    8536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:21:11.616889    8536 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0453126s)
	I0604 16:21:11.616889    8536 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:21:10.6322445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:21:11.621666    8536 out.go:177] * Using the docker driver based on existing profile
	I0604 16:21:11.624028    8536 start.go:284] selected driver: docker
	I0604 16:21:11.624028    8536 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220604161852-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220604161852-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:21:11.624258    8536 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:21:11.748027    8536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:21:13.812800    8536 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0645676s)
	I0604 16:21:13.813241    8536 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:21:12.7646921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:21:13.813652    8536 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:21:13.813733    8536 cni.go:95] Creating CNI manager for ""
	I0604 16:21:13.813733    8536 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:21:13.813733    8536 start_flags.go:306] config:
	{Name:old-k8s-version-20220604161852-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220604161852-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:21:13.820112    8536 out.go:177] * Starting control plane node old-k8s-version-20220604161852-5712 in cluster old-k8s-version-20220604161852-5712
	I0604 16:21:13.824133    8536 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:21:13.826184    8536 out.go:177] * Pulling base image ...
	I0604 16:21:13.830244    8536 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0604 16:21:13.830244    8536 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:21:13.830542    8536 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0604 16:21:13.830606    8536 cache.go:57] Caching tarball of preloaded images
	I0604 16:21:13.830764    8536 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:21:13.830764    8536 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0604 16:21:13.831351    8536 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220604161852-5712\config.json ...
	I0604 16:21:14.936797    8536 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:21:14.936797    8536 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:14.936797    8536 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:14.936797    8536 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:21:14.936797    8536 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:21:14.936797    8536 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:21:14.936797    8536 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:21:14.936797    8536 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:21:14.936797    8536 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:17.283916    8536 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:21:17.283916    8536 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:21:17.283916    8536 start.go:352] acquiring machines lock for old-k8s-version-20220604161852-5712: {Name:mk657bf990f7a9200ffd5262e5ca8011c3561921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:17.284473    8536 start.go:356] acquired machines lock for "old-k8s-version-20220604161852-5712" in 556.7µs
	I0604 16:21:17.284732    8536 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:21:17.284811    8536 fix.go:55] fixHost starting: 
	I0604 16:21:17.302353    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:18.361693    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:18.361808    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0592846s)
	I0604 16:21:18.361885    8536 fix.go:103] recreateIfNeeded on old-k8s-version-20220604161852-5712: state= err=unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:18.361885    8536 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:21:18.365373    8536 out.go:177] * docker "old-k8s-version-20220604161852-5712" container is missing, will recreate.
	I0604 16:21:18.367797    8536 delete.go:124] DEMOLISHING old-k8s-version-20220604161852-5712 ...
	I0604 16:21:18.382563    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:19.463025    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:19.463025    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0802316s)
	W0604 16:21:19.463202    8536 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:19.463268    8536 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:19.477838    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:20.512422    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:20.512422    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0343501s)
	I0604 16:21:20.512562    8536 delete.go:82] Unable to get host status for old-k8s-version-20220604161852-5712, assuming it has already been deleted: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:20.521915    8536 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712
	W0604 16:21:21.594958    8536 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:21:21.594958    8536 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712: (1.0730313s)
	I0604 16:21:21.594958    8536 kic.go:356] could not find the container old-k8s-version-20220604161852-5712 to remove it. will try anyways
	I0604 16:21:21.605084    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:22.681363    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:22.681656    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0762675s)
	W0604 16:21:22.681738    8536 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:22.689244    8536 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0"
	W0604 16:21:23.709803    8536 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:21:23.709803    8536 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0": (1.0205479s)
	I0604 16:21:23.709803    8536 oci.go:625] error shutdown old-k8s-version-20220604161852-5712: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:24.725924    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:25.767616    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:25.767616    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0416808s)
	I0604 16:21:25.767616    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:25.767616    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:21:25.767616    8536 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:26.328173    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:27.367013    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:27.367013    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0385602s)
	I0604 16:21:27.367013    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:27.367013    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:21:27.367013    8536 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:28.462640    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:29.538666    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:29.538771    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0760138s)
	I0604 16:21:29.538826    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:29.538826    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:21:29.538826    8536 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:30.861890    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:31.923695    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:31.923842    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.061623s)
	I0604 16:21:31.923842    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:31.923842    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:21:31.923842    8536 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:33.529808    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:34.624036    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:34.624036    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0942161s)
	I0604 16:21:34.624036    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:34.624036    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:21:34.624036    8536 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:36.985594    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:38.066743    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:38.066780    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0810692s)
	I0604 16:21:38.066861    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:38.066901    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:21:38.066959    8536 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:42.584279    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:21:43.658193    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:43.658193    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.073336s)
	I0604 16:21:43.658193    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:43.658193    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:21:43.658193    8536 oci.go:88] couldn't shut down old-k8s-version-20220604161852-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	 
	I0604 16:21:43.665192    8536 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220604161852-5712
	I0604 16:21:44.731320    8536 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220604161852-5712: (1.0660398s)
	I0604 16:21:44.738338    8536 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712
	W0604 16:21:45.775701    8536 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:21:45.775701    8536 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712: (1.0373519s)
	I0604 16:21:45.782702    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:21:46.813671    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:21:46.813671    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0308135s)
	I0604 16:21:46.820687    8536 network_create.go:272] running [docker network inspect old-k8s-version-20220604161852-5712] to gather additional debugging logs...
	I0604 16:21:46.820687    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712
	W0604 16:21:47.906987    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:21:47.906987    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712: (1.086288s)
	I0604 16:21:47.906987    8536 network_create.go:275] error running [docker network inspect old-k8s-version-20220604161852-5712]: docker network inspect old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220604161852-5712
	I0604 16:21:47.906987    8536 network_create.go:277] output of [docker network inspect old-k8s-version-20220604161852-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220604161852-5712
	
	** /stderr **
	W0604 16:21:47.907959    8536 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:21:47.907959    8536 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:21:48.911137    8536 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:21:48.920632    8536 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:21:48.920890    8536 start.go:165] libmachine.API.Create for "old-k8s-version-20220604161852-5712" (driver="docker")
	I0604 16:21:48.920890    8536 client.go:168] LocalClient.Create starting
	I0604 16:21:48.922298    8536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:21:48.922298    8536 main.go:134] libmachine: Decoding PEM data...
	I0604 16:21:48.922824    8536 main.go:134] libmachine: Parsing certificate...
	I0604 16:21:48.923017    8536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:21:48.923017    8536 main.go:134] libmachine: Decoding PEM data...
	I0604 16:21:48.923538    8536 main.go:134] libmachine: Parsing certificate...
	I0604 16:21:48.932753    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:21:50.022044    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:21:50.022044    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0892794s)
	I0604 16:21:50.030054    8536 network_create.go:272] running [docker network inspect old-k8s-version-20220604161852-5712] to gather additional debugging logs...
	I0604 16:21:50.030103    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712
	W0604 16:21:51.111247    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:21:51.111316    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712: (1.0809709s)
	I0604 16:21:51.111316    8536 network_create.go:275] error running [docker network inspect old-k8s-version-20220604161852-5712]: docker network inspect old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220604161852-5712
	I0604 16:21:51.111316    8536 network_create.go:277] output of [docker network inspect old-k8s-version-20220604161852-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220604161852-5712
	
	** /stderr **
	I0604 16:21:51.119661    8536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:21:52.203780    8536 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0841069s)
	I0604 16:21:52.220787    8536 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006100e0] misses:0}
	I0604 16:21:52.220787    8536 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:21:52.220787    8536 network_create.go:115] attempt to create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:21:52.226770    8536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712
	W0604 16:21:53.333249    8536 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:21:53.333348    8536 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: (1.1063424s)
	E0604 16:21:53.333348    8536 network_create.go:104] error while trying to create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24: create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dcfe56a47dd6ad3755e17059263013b8b07e02072ac2c56892cb51991d0b48ac (br-dcfe56a47dd6): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:21:53.333348    8536 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dcfe56a47dd6ad3755e17059263013b8b07e02072ac2c56892cb51991d0b48ac (br-dcfe56a47dd6): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dcfe56a47dd6ad3755e17059263013b8b07e02072ac2c56892cb51991d0b48ac (br-dcfe56a47dd6): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:21:53.348268    8536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:21:54.486886    8536 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1386054s)
	I0604 16:21:54.493877    8536 cli_runner.go:164] Run: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:21:55.546415    8536 cli_runner.go:211] docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:21:55.546415    8536 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0525265s)
	I0604 16:21:55.546415    8536 client.go:171] LocalClient.Create took 6.6254524s
	I0604 16:21:57.571055    8536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:21:57.577061    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:21:58.680882    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:21:58.680924    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1036012s)
	I0604 16:21:58.681141    8536 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:21:58.864477    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:00.002228    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:00.002265    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1376656s)
	W0604 16:22:00.002543    8536 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:00.002543    8536 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:00.015186    8536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:00.021672    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:01.116465    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:01.116546    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0947146s)
	I0604 16:22:01.116595    8536 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:01.326447    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:02.433773    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:02.433892    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1072654s)
	W0604 16:22:02.434028    8536 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:02.434028    8536 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:02.434028    8536 start.go:134] duration metric: createHost completed in 13.5227441s
	I0604 16:22:02.444659    8536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:02.450251    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:03.561512    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:03.561512    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.111249s)
	I0604 16:22:03.561512    8536 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:03.909723    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:05.030198    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:05.030198    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1204028s)
	W0604 16:22:05.030198    8536 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:05.030198    8536 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:05.040238    8536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:05.047181    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:06.180697    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:06.180849    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1332015s)
	I0604 16:22:06.181027    8536 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:06.412269    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:07.475375    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:07.475401    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0628623s)
	W0604 16:22:07.475562    8536 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:07.475562    8536 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:07.475562    8536 fix.go:57] fixHost completed within 50.190221s
	I0604 16:22:07.475562    8536 start.go:81] releasing machines lock for "old-k8s-version-20220604161852-5712", held for 50.190445s
	W0604 16:22:07.475562    8536 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	W0604 16:22:07.476109    8536 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	I0604 16:22:07.476109    8536 start.go:614] Will try again in 5 seconds ...
	I0604 16:22:12.490543    8536 start.go:352] acquiring machines lock for old-k8s-version-20220604161852-5712: {Name:mk657bf990f7a9200ffd5262e5ca8011c3561921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:22:12.490543    8536 start.go:356] acquired machines lock for "old-k8s-version-20220604161852-5712" in 0s
	I0604 16:22:12.490543    8536 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:22:12.490543    8536 fix.go:55] fixHost starting: 
	I0604 16:22:12.505630    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:13.598908    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:13.598908    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0932656s)
	I0604 16:22:13.598908    8536 fix.go:103] recreateIfNeeded on old-k8s-version-20220604161852-5712: state= err=unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:13.598908    8536 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:22:13.598908    8536 out.go:177] * docker "old-k8s-version-20220604161852-5712" container is missing, will recreate.
	I0604 16:22:13.605858    8536 delete.go:124] DEMOLISHING old-k8s-version-20220604161852-5712 ...
	I0604 16:22:13.619895    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:14.687812    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:14.687812    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0679059s)
	W0604 16:22:14.687812    8536 stop.go:75] unable to get state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:14.687812    8536 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:14.702846    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:15.771718    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:15.771718    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0688606s)
	I0604 16:22:15.771718    8536 delete.go:82] Unable to get host status for old-k8s-version-20220604161852-5712, assuming it has already been deleted: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:15.778706    8536 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712
	W0604 16:22:16.866402    8536 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:16.866427    8536 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712: (1.0876347s)
	I0604 16:22:16.866427    8536 kic.go:356] could not find the container old-k8s-version-20220604161852-5712 to remove it. will try anyways
	I0604 16:22:16.873800    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:17.908396    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:17.908396    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0345845s)
	W0604 16:22:17.908396    8536 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:17.922428    8536 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0"
	W0604 16:22:18.964082    8536 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:22:18.964082    8536 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0": (1.0416429s)
	I0604 16:22:18.964082    8536 oci.go:625] error shutdown old-k8s-version-20220604161852-5712: docker exec --privileged -t old-k8s-version-20220604161852-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:19.980398    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:21.033186    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:21.033186    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0527767s)
	I0604 16:22:21.033186    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:21.033186    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:22:21.033186    8536 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:21.528211    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:22.580472    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:22.580640    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0512497s)
	I0604 16:22:22.580781    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:22.580781    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:22:22.580879    8536 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:23.181618    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:24.253186    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:24.253186    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.071556s)
	I0604 16:22:24.253186    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:24.253186    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:22:24.253186    8536 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:25.153619    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:26.259194    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:26.259324    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.1052749s)
	I0604 16:22:26.259324    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:26.259324    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:22:26.259324    8536 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:28.261991    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:29.336488    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:29.336488    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.0744495s)
	I0604 16:22:29.336488    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:29.336488    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:22:29.336488    8536 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:31.177051    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:32.226402    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:32.226573    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.049339s)
	I0604 16:22:32.226637    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:32.226637    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:22:32.226637    8536 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:34.918248    8536 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:22:36.036097    8536 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:36.036097    8536 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (1.1171412s)
	I0604 16:22:36.036194    8536 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:36.036194    8536 oci.go:639] temporary error: container old-k8s-version-20220604161852-5712 status is  but expect it to be exited
	I0604 16:22:36.036283    8536 oci.go:88] couldn't shut down old-k8s-version-20220604161852-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	 
	I0604 16:22:36.043699    8536 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220604161852-5712
	I0604 16:22:37.128502    8536 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220604161852-5712: (1.0847327s)
	I0604 16:22:37.137300    8536 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712
	W0604 16:22:38.211222    8536 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:38.211222    8536 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220604161852-5712: (1.0739106s)
	I0604 16:22:38.219219    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:22:39.334373    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:22:39.334373    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1151419s)
	I0604 16:22:39.343387    8536 network_create.go:272] running [docker network inspect old-k8s-version-20220604161852-5712] to gather additional debugging logs...
	I0604 16:22:39.343387    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712
	W0604 16:22:40.416349    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:40.416349    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712: (1.0729504s)
	I0604 16:22:40.416349    8536 network_create.go:275] error running [docker network inspect old-k8s-version-20220604161852-5712]: docker network inspect old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220604161852-5712
	I0604 16:22:40.416349    8536 network_create.go:277] output of [docker network inspect old-k8s-version-20220604161852-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220604161852-5712
	
	** /stderr **
	W0604 16:22:40.417473    8536 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:22:40.417473    8536 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:22:41.430640    8536 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:22:41.437074    8536 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:22:41.437074    8536 start.go:165] libmachine.API.Create for "old-k8s-version-20220604161852-5712" (driver="docker")
	I0604 16:22:41.437074    8536 client.go:168] LocalClient.Create starting
	I0604 16:22:41.437876    8536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:22:41.437876    8536 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:41.437876    8536 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:41.438548    8536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:22:41.438548    8536 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:41.438548    8536 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:41.446811    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:22:42.497600    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:22:42.497682    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0505716s)
	I0604 16:22:42.506394    8536 network_create.go:272] running [docker network inspect old-k8s-version-20220604161852-5712] to gather additional debugging logs...
	I0604 16:22:42.506394    8536 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220604161852-5712
	W0604 16:22:43.671661    8536 cli_runner.go:211] docker network inspect old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:43.671773    8536 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220604161852-5712: (1.1652546s)
	I0604 16:22:43.671773    8536 network_create.go:275] error running [docker network inspect old-k8s-version-20220604161852-5712]: docker network inspect old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220604161852-5712
	I0604 16:22:43.671773    8536 network_create.go:277] output of [docker network inspect old-k8s-version-20220604161852-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220604161852-5712
	
	** /stderr **
	I0604 16:22:43.680985    8536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:22:44.790216    8536 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1092192s)
	I0604 16:22:44.810124    8536 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006100e0] amended:false}} dirty:map[] misses:0}
	I0604 16:22:44.810124    8536 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:22:44.828463    8536 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006100e0] amended:true}} dirty:map[192.168.49.0:0xc0006100e0 192.168.58.0:0xc000cb61b0] misses:0}
	I0604 16:22:44.828463    8536 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:22:44.828463    8536 network_create.go:115] attempt to create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:22:44.841504    8536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712
	W0604 16:22:45.972101    8536 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:45.972319    8536 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: (1.1305283s)
	E0604 16:22:45.972393    8536 network_create.go:104] error while trying to create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24: create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5b23e90cc794b97ffbbad90ac3d4b8defd1358c8cdc1644bef644316c695e3cb (br-5b23e90cc794): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:22:45.972551    8536 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5b23e90cc794b97ffbbad90ac3d4b8defd1358c8cdc1644bef644316c695e3cb (br-5b23e90cc794): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220604161852-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5b23e90cc794b97ffbbad90ac3d4b8defd1358c8cdc1644bef644316c695e3cb (br-5b23e90cc794): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
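The "networks have overlapping IPv4" failure above occurs because the requested 192.168.58.0/24 bridge collides with the subnet of an existing bridge (br-1140b1ac4d94). A sketch of the kind of overlap check involved, using Go's standard `net` package; this is a hypothetical helper for illustration, not Docker's actual code:

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two IPv4 CIDR blocks share any addresses.
// Two subnets overlap iff either network contains the other's base address.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	// The requested subnet conflicts when an existing bridge already covers it.
	ok, _ := cidrsOverlap("192.168.58.0/24", "192.168.58.0/24")
	fmt.Println(ok) // true: identical subnets overlap
	ok, _ = cidrsOverlap("192.168.58.0/24", "192.168.49.0/24")
	fmt.Println(ok) // false: disjoint /24s
}
```

Note that the log's subnet reservation logic (network.go skipping 192.168.49.0/24 as reserved) only guards against subnets minikube itself knows about; a conflicting bridge created outside that bookkeeping still fails at `docker network create` time, as seen here.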
	I0604 16:22:45.986946    8536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:22:47.064585    8536 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0776264s)
	I0604 16:22:47.071556    8536 cli_runner.go:164] Run: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:22:48.155586    8536 cli_runner.go:211] docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:22:48.155586    8536 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0838265s)
	I0604 16:22:48.155676    8536 client.go:171] LocalClient.Create took 6.7185285s
	I0604 16:22:50.167061    8536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:50.173878    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:51.200382    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:51.200382    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0264934s)
	I0604 16:22:51.200382    8536 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:51.477997    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:52.505934    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:52.506103    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0277611s)
	W0604 16:22:52.506282    8536 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:52.506392    8536 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:52.516938    8536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:52.523060    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:53.571785    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:53.571785    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0486108s)
	I0604 16:22:53.571785    8536 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:53.783004    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:54.831672    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:54.831672    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0486557s)
	W0604 16:22:54.831672    8536 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:54.831672    8536 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:54.831672    8536 start.go:134] duration metric: createHost completed in 13.4006173s
	I0604 16:22:54.842312    8536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:54.847737    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:55.890692    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:55.890692    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0429008s)
	I0604 16:22:55.890921    8536 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:56.226325    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:57.289652    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:57.289709    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.0630527s)
	W0604 16:22:57.289709    8536 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:57.289709    8536 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:57.304141    8536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:57.313140    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:58.410021    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:58.410021    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.096869s)
	I0604 16:22:58.410021    8536 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:58.762718    8536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712
	W0604 16:22:59.870423    8536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712 returned with exit code 1
	I0604 16:22:59.870423    8536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: (1.1075526s)
	W0604 16:22:59.870423    8536 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:22:59.870423    8536 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220604161852-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220604161852-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	I0604 16:22:59.870423    8536 fix.go:57] fixHost completed within 47.3793639s
	I0604 16:22:59.870423    8536 start.go:81] releasing machines lock for "old-k8s-version-20220604161852-5712", held for 47.3793639s
	W0604 16:22:59.871203    8536 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220604161852-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220604161852-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	I0604 16:22:59.876510    8536 out.go:177] 
	W0604 16:22:59.878598    8536 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220604161852-5712 container: docker volume create old-k8s-version-20220604161852-5712 --label name.minikube.sigs.k8s.io=old-k8s-version-20220604161852-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220604161852-5712: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220604161852-5712': mkdir /var/lib/docker/volumes/old-k8s-version-20220604161852-5712: read-only file system
	
	W0604 16:22:59.878598    8536 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:22:59.878598    8536 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:22:59.882778    8536 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220604161852-5712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1327322s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (3.0014644s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0604 16:23:04.234034    6008 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (117.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220604161933-5712 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p no-preload-20220604161933-5712 --alsologtostderr -v=3: exit status 82 (22.8371166s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-20220604161933-5712"  ...
	* Stopping node "no-preload-20220604161933-5712"  ...
	* Stopping node "no-preload-20220604161933-5712"  ...
	* Stopping node "no-preload-20220604161933-5712"  ...
	* Stopping node "no-preload-20220604161933-5712"  ...
	* Stopping node "no-preload-20220604161933-5712"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0604 16:21:10.342963    6040 out.go:296] Setting OutFile to fd 2040 ...
	I0604 16:21:10.407973    6040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:10.407973    6040 out.go:309] Setting ErrFile to fd 2000...
	I0604 16:21:10.407973    6040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:10.419965    6040 out.go:303] Setting JSON to false
	I0604 16:21:10.420970    6040 daemonize_windows.go:44] trying to kill existing schedule stop for profile no-preload-20220604161933-5712...
	I0604 16:21:10.430965    6040 ssh_runner.go:195] Run: systemctl --version
	I0604 16:21:10.436982    6040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:21:12.971261    6040 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:21:12.971261    6040 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (2.5342515s)
	I0604 16:21:12.982265    6040 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0604 16:21:12.989264    6040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:21:14.123253    6040 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:21:14.123253    6040 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.133977s)
	I0604 16:21:14.123253    6040 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:14.500652    6040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:21:15.583658    6040 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:21:15.583658    6040 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0829277s)
	I0604 16:21:15.583785    6040 openrc.go:165] stop output: 
	E0604 16:21:15.583785    6040 daemonize_windows.go:38] error terminating scheduled stop for profile no-preload-20220604161933-5712: stopping schedule-stop service for profile no-preload-20220604161933-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:15.583785    6040 mustload.go:65] Loading cluster: no-preload-20220604161933-5712
	I0604 16:21:15.584385    6040 config.go:178] Loaded profile config "no-preload-20220604161933-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:21:15.584988    6040 stop.go:39] StopHost: no-preload-20220604161933-5712
	I0604 16:21:15.588399    6040 out.go:177] * Stopping node "no-preload-20220604161933-5712"  ...
	I0604 16:21:15.605130    6040 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:16.693933    6040 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:16.693933    6040 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.087597s)
	W0604 16:21:16.694028    6040 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	W0604 16:21:16.694050    6040 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:16.694050    6040 retry.go:31] will retry after 937.714187ms: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:17.631980    6040 stop.go:39] StopHost: no-preload-20220604161933-5712
	I0604 16:21:17.637326    6040 out.go:177] * Stopping node "no-preload-20220604161933-5712"  ...
	I0604 16:21:17.652230    6040 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:18.834512    6040 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:18.834512    6040 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1822692s)
	W0604 16:21:18.834512    6040 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	W0604 16:21:18.834512    6040 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:18.834512    6040 retry.go:31] will retry after 1.386956246s: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:20.229893    6040 stop.go:39] StopHost: no-preload-20220604161933-5712
	I0604 16:21:20.234728    6040 out.go:177] * Stopping node "no-preload-20220604161933-5712"  ...
	I0604 16:21:20.250091    6040 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:21.331252    6040 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:21.331252    6040 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0809964s)
	W0604 16:21:21.331252    6040 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	W0604 16:21:21.331252    6040 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:21.331252    6040 retry.go:31] will retry after 2.670351914s: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:24.008932    6040 stop.go:39] StopHost: no-preload-20220604161933-5712
	I0604 16:21:24.014793    6040 out.go:177] * Stopping node "no-preload-20220604161933-5712"  ...
	I0604 16:21:24.029279    6040 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:25.118075    6040 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:25.118130    6040 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0886891s)
	W0604 16:21:25.118230    6040 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	W0604 16:21:25.118280    6040 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:25.118324    6040 retry.go:31] will retry after 1.909024939s: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:27.036197    6040 stop.go:39] StopHost: no-preload-20220604161933-5712
	I0604 16:21:27.040189    6040 out.go:177] * Stopping node "no-preload-20220604161933-5712"  ...
	I0604 16:21:27.057521    6040 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:28.122532    6040 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:28.122532    6040 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0649986s)
	W0604 16:21:28.122532    6040 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	W0604 16:21:28.122532    6040 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:28.122532    6040 retry.go:31] will retry after 3.323628727s: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:31.458048    6040 stop.go:39] StopHost: no-preload-20220604161933-5712
	I0604 16:21:31.464677    6040 out.go:177] * Stopping node "no-preload-20220604161933-5712"  ...
	I0604 16:21:31.482780    6040 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:32.579458    6040 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:32.579543    6040 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0964604s)
	W0604 16:21:32.579690    6040 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	W0604 16:21:32.579690    6040 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:32.583575    6040 out.go:177] 
	W0604 16:21:32.585999    6040 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20220604161933-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20220604161933-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:21:32.585999    6040 out.go:239] * 
	* 
	W0604 16:21:32.900355    6040 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:21:32.903967    6040 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p no-preload-20220604161933-5712 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.140748s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (3.0083441s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0604 16:21:37.072097    7912 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (27.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (9.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.8996186s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0604 16:21:21.703162    8236 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

                                                
                                                
** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220604161913-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220604161913-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9212045s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1512208s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.8846086s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0604 16:21:28.669213    1684 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (9.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (118.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220604161913-5712 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220604161913-5712 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m54.1392879s)

                                                
                                                
-- stdout --
	* [embed-certs-20220604161913-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20220604161913-5712 in cluster embed-certs-20220604161913-5712
	* Pulling base image ...
	* docker "embed-certs-20220604161913-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220604161913-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0604 16:21:28.920941    7756 out.go:296] Setting OutFile to fd 2036 ...
	I0604 16:21:28.976500    7756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:28.976500    7756 out.go:309] Setting ErrFile to fd 1544...
	I0604 16:21:28.976500    7756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:28.987159    7756 out.go:303] Setting JSON to false
	I0604 16:21:28.988807    7756 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10761,"bootTime":1654348927,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:21:28.989837    7756 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:21:28.994756    7756 out.go:177] * [embed-certs-20220604161913-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:21:28.997859    7756 notify.go:193] Checking for updates...
	I0604 16:21:29.002830    7756 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:21:29.004851    7756 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:21:29.006759    7756 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:21:29.009758    7756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:21:29.012858    7756 config.go:178] Loaded profile config "embed-certs-20220604161913-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:21:29.014013    7756 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:21:31.645783    7756 docker.go:137] docker version: linux-20.10.16
	I0604 16:21:31.653806    7756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:21:33.735573    7756 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0816189s)
	I0604 16:21:33.736153    7756 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:21:32.6960614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:21:33.740353    7756 out.go:177] * Using the docker driver based on existing profile
	I0604 16:21:33.742234    7756 start.go:284] selected driver: docker
	I0604 16:21:33.742825    7756 start.go:806] validating driver "docker" against &{Name:embed-certs-20220604161913-5712 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220604161913-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:21:33.743013    7756 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:21:33.814895    7756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:21:35.901910    7756 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0869918s)
	I0604 16:21:35.901959    7756 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:21:34.8840872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:21:35.902616    7756 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:21:35.902651    7756 cni.go:95] Creating CNI manager for ""
	I0604 16:21:35.902651    7756 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:21:35.902651    7756 start_flags.go:306] config:
	{Name:embed-certs-20220604161913-5712 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220604161913-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:21:35.908827    7756 out.go:177] * Starting control plane node embed-certs-20220604161913-5712 in cluster embed-certs-20220604161913-5712
	I0604 16:21:35.911208    7756 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:21:35.914018    7756 out.go:177] * Pulling base image ...
	I0604 16:21:35.917815    7756 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:21:35.917815    7756 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:21:35.917815    7756 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:21:35.917815    7756 cache.go:57] Caching tarball of preloaded images
	I0604 16:21:35.918426    7756 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:21:35.918478    7756 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:21:35.918478    7756 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220604161913-5712\config.json ...
	I0604 16:21:37.008045    7756 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:21:37.008045    7756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:37.008045    7756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:37.008045    7756 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:21:37.008045    7756 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:21:37.008045    7756 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:21:37.008045    7756 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:21:37.008045    7756 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:21:37.008045    7756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:39.341743    7756 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:21:39.341743    7756 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:21:39.341912    7756 start.go:352] acquiring machines lock for embed-certs-20220604161913-5712: {Name:mkcc405ffbb18d72833c60c092ab314d3a46ad85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:39.341944    7756 start.go:356] acquired machines lock for "embed-certs-20220604161913-5712" in 0s
	I0604 16:21:39.341944    7756 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:21:39.341944    7756 fix.go:55] fixHost starting: 
	I0604 16:21:39.359767    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:40.416118    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:40.416190    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0550568s)
	I0604 16:21:40.416227    7756 fix.go:103] recreateIfNeeded on embed-certs-20220604161913-5712: state= err=unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:40.416311    7756 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:21:40.420428    7756 out.go:177] * docker "embed-certs-20220604161913-5712" container is missing, will recreate.
	I0604 16:21:40.422713    7756 delete.go:124] DEMOLISHING embed-certs-20220604161913-5712 ...
	I0604 16:21:40.437882    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:41.484120    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:41.484174    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0460768s)
	W0604 16:21:41.484174    7756 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:41.484174    7756 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:41.499163    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:42.530766    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:42.530766    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0315922s)
	I0604 16:21:42.530766    7756 delete.go:82] Unable to get host status for embed-certs-20220604161913-5712, assuming it has already been deleted: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:42.536752    7756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712
	W0604 16:21:43.610604    7756 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:21:43.610604    7756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712: (1.07384s)
	I0604 16:21:43.610604    7756 kic.go:356] could not find the container embed-certs-20220604161913-5712 to remove it. will try anyways
	I0604 16:21:43.618602    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:44.715729    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:44.715729    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0969968s)
	W0604 16:21:44.715729    7756 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:44.725879    7756 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0"
	W0604 16:21:45.791701    7756 cli_runner.go:211] docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:21:45.791701    7756 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0": (1.0658107s)
	I0604 16:21:45.791701    7756 oci.go:625] error shutdown embed-certs-20220604161913-5712: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:46.808989    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:47.891574    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:47.891574    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0825727s)
	I0604 16:21:47.891574    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:47.891574    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:21:47.891574    7756 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:48.461086    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:49.541932    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:49.541932    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0808341s)
	I0604 16:21:49.541932    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:49.541932    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:21:49.541932    7756 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:50.629911    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:51.740100    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:51.740100    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1100543s)
	I0604 16:21:51.740100    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:51.740100    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:21:51.740100    7756 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:53.061120    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:54.158776    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:54.158776    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0976448s)
	I0604 16:21:54.158776    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:54.158776    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:21:54.158776    7756 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:55.760583    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:21:56.883382    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:56.883382    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1227868s)
	I0604 16:21:56.883382    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:56.883382    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:21:56.883382    7756 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:21:59.240350    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:00.333014    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:00.333014    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0926518s)
	I0604 16:22:00.333014    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:00.333014    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:00.333014    7756 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:04.851606    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:05.962473    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:05.962473    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.110855s)
	I0604 16:22:05.962473    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:05.962473    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:05.962473    7756 oci.go:88] couldn't shut down embed-certs-20220604161913-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	 
	I0604 16:22:05.969445    7756 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220604161913-5712
	I0604 16:22:07.082649    7756 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220604161913-5712: (1.1131922s)
	I0604 16:22:07.090195    7756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712
	W0604 16:22:08.168820    7756 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:08.168927    7756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712: (1.0784323s)
	I0604 16:22:08.184892    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:22:09.224390    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:22:09.224390    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0394867s)
	I0604 16:22:09.231390    7756 network_create.go:272] running [docker network inspect embed-certs-20220604161913-5712] to gather additional debugging logs...
	I0604 16:22:09.231390    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712
	W0604 16:22:10.282605    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:10.282605    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712: (1.051204s)
	I0604 16:22:10.282605    7756 network_create.go:275] error running [docker network inspect embed-certs-20220604161913-5712]: docker network inspect embed-certs-20220604161913-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220604161913-5712
	I0604 16:22:10.282605    7756 network_create.go:277] output of [docker network inspect embed-certs-20220604161913-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220604161913-5712
	
	** /stderr **
	W0604 16:22:10.283352    7756 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:22:10.283352    7756 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:22:11.285023    7756 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:22:11.294015    7756 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:22:11.295041    7756 start.go:165] libmachine.API.Create for "embed-certs-20220604161913-5712" (driver="docker")
	I0604 16:22:11.295041    7756 client.go:168] LocalClient.Create starting
	I0604 16:22:11.295900    7756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:22:11.296159    7756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:11.296236    7756 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:11.296693    7756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:22:11.296924    7756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:11.296948    7756 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:11.306032    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:22:12.382463    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:22:12.382463    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0754176s)
	I0604 16:22:12.389491    7756 network_create.go:272] running [docker network inspect embed-certs-20220604161913-5712] to gather additional debugging logs...
	I0604 16:22:12.389491    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712
	W0604 16:22:13.458482    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:13.458482    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712: (1.0689795s)
	I0604 16:22:13.458658    7756 network_create.go:275] error running [docker network inspect embed-certs-20220604161913-5712]: docker network inspect embed-certs-20220604161913-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220604161913-5712
	I0604 16:22:13.458701    7756 network_create.go:277] output of [docker network inspect embed-certs-20220604161913-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220604161913-5712
	
	** /stderr **
	I0604 16:22:13.468408    7756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:22:14.591885    7756 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1234645s)
	I0604 16:22:14.608813    7756 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004025d8] misses:0}
	I0604 16:22:14.608813    7756 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:22:14.608813    7756 network_create.go:115] attempt to create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:22:14.615846    7756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712
	W0604 16:22:15.739708    7756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:15.739708    7756 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: (1.12385s)
	E0604 16:22:15.739708    7756 network_create.go:104] error while trying to create docker network embed-certs-20220604161913-5712 192.168.49.0/24: create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5551d1354a73f64649732cbd81d8b3783d278700dc8a0c959061660b4df507ba (br-5551d1354a73): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:22:15.739708    7756 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5551d1354a73f64649732cbd81d8b3783d278700dc8a0c959061660b4df507ba (br-5551d1354a73): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5551d1354a73f64649732cbd81d8b3783d278700dc8a0c959061660b4df507ba (br-5551d1354a73): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:22:15.753706    7756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:22:16.850180    7756 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0964625s)
	I0604 16:22:16.858630    7756 cli_runner.go:164] Run: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:22:17.939937    7756 cli_runner.go:211] docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:22:17.939937    7756 cli_runner.go:217] Completed: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0812956s)
	I0604 16:22:17.939937    7756 client.go:171] LocalClient.Create took 6.644824s
	I0604 16:22:19.953422    7756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:19.963099    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:21.049369    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:21.049486    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0861897s)
	I0604 16:22:21.049486    7756 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:21.228164    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:22.314682    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:22.314682    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0863673s)
	W0604 16:22:22.314682    7756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:22:22.314682    7756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:22.325466    7756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:22.330992    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:23.359696    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:23.359696    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0286935s)
	I0604 16:22:23.359696    7756 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:23.571228    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:24.643157    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:24.643200    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0716836s)
	W0604 16:22:24.643200    7756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:22:24.643200    7756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:24.643200    7756 start.go:134] duration metric: createHost completed in 13.3580314s
	I0604 16:22:24.653739    7756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:24.660690    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:25.720381    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:25.720381    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0595633s)
	I0604 16:22:25.720662    7756 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:26.064309    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:27.156515    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:27.156515    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0921935s)
	W0604 16:22:27.156515    7756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:22:27.156515    7756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:27.165503    7756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:27.172499    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:28.238111    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:28.238111    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.065601s)
	I0604 16:22:28.238111    7756 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:28.481743    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:22:29.587616    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:29.587850    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.1058604s)
	W0604 16:22:29.588090    7756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:22:29.588143    7756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:29.588143    7756 fix.go:57] fixHost completed within 50.2456511s
	I0604 16:22:29.588143    7756 start.go:81] releasing machines lock for "embed-certs-20220604161913-5712", held for 50.2456511s
	W0604 16:22:29.588143    7756 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	W0604 16:22:29.588143    7756 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	I0604 16:22:29.588143    7756 start.go:614] Will try again in 5 seconds ...
	I0604 16:22:34.591867    7756 start.go:352] acquiring machines lock for embed-certs-20220604161913-5712: {Name:mkcc405ffbb18d72833c60c092ab314d3a46ad85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:22:34.591867    7756 start.go:356] acquired machines lock for "embed-certs-20220604161913-5712" in 0s
	I0604 16:22:34.591867    7756 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:22:34.592403    7756 fix.go:55] fixHost starting: 
	I0604 16:22:34.606854    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:35.667170    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:35.667170    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0603045s)
	I0604 16:22:35.667170    7756 fix.go:103] recreateIfNeeded on embed-certs-20220604161913-5712: state= err=unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:35.667170    7756 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:22:35.672175    7756 out.go:177] * docker "embed-certs-20220604161913-5712" container is missing, will recreate.
	I0604 16:22:35.674167    7756 delete.go:124] DEMOLISHING embed-certs-20220604161913-5712 ...
	I0604 16:22:35.686171    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:36.832066    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:36.832066    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1458828s)
	W0604 16:22:36.832066    7756 stop.go:75] unable to get state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:36.832066    7756 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:36.847061    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:37.960487    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:37.960487    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1132614s)
	I0604 16:22:37.960659    7756 delete.go:82] Unable to get host status for embed-certs-20220604161913-5712, assuming it has already been deleted: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:37.967393    7756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712
	W0604 16:22:39.135023    7756 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:22:39.135023    7756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712: (1.167617s)
	I0604 16:22:39.135023    7756 kic.go:356] could not find the container embed-certs-20220604161913-5712 to remove it. will try anyways
	I0604 16:22:39.144126    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:40.292832    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:40.293109    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.148693s)
	W0604 16:22:40.293188    7756 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:40.300571    7756 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0"
	W0604 16:22:41.366146    7756 cli_runner.go:211] docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:22:41.366283    7756 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0": (1.0655636s)
	I0604 16:22:41.366283    7756 oci.go:625] error shutdown embed-certs-20220604161913-5712: docker exec --privileged -t embed-certs-20220604161913-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:42.381511    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:43.500409    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:43.500602    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1188859s)
	I0604 16:22:43.500602    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:43.500602    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:43.500602    7756 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:44.001847    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:45.069645    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:45.069645    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0677866s)
	I0604 16:22:45.069645    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:45.069645    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:45.069645    7756 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:45.678333    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:46.782288    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:46.782288    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1036976s)
	I0604 16:22:46.782458    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:46.782458    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:46.782529    7756 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:47.692163    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:48.767559    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:48.767559    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0753837s)
	I0604 16:22:48.767559    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:48.767559    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:48.767559    7756 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:50.780614    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:51.833823    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:51.833823    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0531972s)
	I0604 16:22:51.833823    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:51.833823    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:51.833823    7756 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:53.675034    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:54.783798    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:54.783798    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.1087521s)
	I0604 16:22:54.783798    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:54.783798    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:54.783798    7756 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:57.469904    7756 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:22:58.565565    7756 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:58.565565    7756 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (1.0956493s)
	I0604 16:22:58.565565    7756 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:22:58.565565    7756 oci.go:639] temporary error: container embed-certs-20220604161913-5712 status is  but expect it to be exited
	I0604 16:22:58.565565    7756 oci.go:88] couldn't shut down embed-certs-20220604161913-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	 
	I0604 16:22:58.575239    7756 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220604161913-5712
	I0604 16:22:59.666268    7756 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220604161913-5712: (1.0910164s)
	I0604 16:22:59.673814    7756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712
	W0604 16:23:00.776209    7756 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:00.776209    7756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220604161913-5712: (1.1023827s)
	I0604 16:23:00.785563    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:23:01.861342    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:23:01.861342    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0757134s)
	I0604 16:23:01.868959    7756 network_create.go:272] running [docker network inspect embed-certs-20220604161913-5712] to gather additional debugging logs...
	I0604 16:23:01.868959    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712
	W0604 16:23:02.946375    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:02.946375    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712: (1.0773342s)
	I0604 16:23:02.946447    7756 network_create.go:275] error running [docker network inspect embed-certs-20220604161913-5712]: docker network inspect embed-certs-20220604161913-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220604161913-5712
	I0604 16:23:02.946447    7756 network_create.go:277] output of [docker network inspect embed-certs-20220604161913-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220604161913-5712
	
	** /stderr **
	W0604 16:23:02.947322    7756 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:23:02.947322    7756 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:23:03.953240    7756 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:23:03.957359    7756 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:23:03.957563    7756 start.go:165] libmachine.API.Create for "embed-certs-20220604161913-5712" (driver="docker")
	I0604 16:23:03.957563    7756 client.go:168] LocalClient.Create starting
	I0604 16:23:03.958093    7756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:23:03.958329    7756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:03.958403    7756 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:03.958542    7756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:23:03.958542    7756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:03.958542    7756 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:03.968010    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:23:05.070504    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:23:05.070572    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1024115s)
	I0604 16:23:05.077248    7756 network_create.go:272] running [docker network inspect embed-certs-20220604161913-5712] to gather additional debugging logs...
	I0604 16:23:05.077248    7756 cli_runner.go:164] Run: docker network inspect embed-certs-20220604161913-5712
	W0604 16:23:06.207819    7756 cli_runner.go:211] docker network inspect embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:06.207877    7756 cli_runner.go:217] Completed: docker network inspect embed-certs-20220604161913-5712: (1.130443s)
	I0604 16:23:06.207877    7756 network_create.go:275] error running [docker network inspect embed-certs-20220604161913-5712]: docker network inspect embed-certs-20220604161913-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220604161913-5712
	I0604 16:23:06.207877    7756 network_create.go:277] output of [docker network inspect embed-certs-20220604161913-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220604161913-5712
	
	** /stderr **
	I0604 16:23:06.215547    7756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:23:07.300849    7756 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0852278s)
	I0604 16:23:07.320767    7756 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004025d8] amended:false}} dirty:map[] misses:0}
	I0604 16:23:07.320767    7756 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:23:07.345717    7756 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004025d8] amended:true}} dirty:map[192.168.49.0:0xc0004025d8 192.168.58.0:0xc00047c1f8] misses:0}
	I0604 16:23:07.345809    7756 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:23:07.345809    7756 network_create.go:115] attempt to create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:23:07.357815    7756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712
	W0604 16:23:08.434269    7756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:08.434269    7756 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: (1.0758036s)
	E0604 16:23:08.434269    7756 network_create.go:104] error while trying to create docker network embed-certs-20220604161913-5712 192.168.58.0/24: create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network df9e889c2bfb73b549f12f75f8b00dfd41f53815701c725cbd70522ef640acf0 (br-df9e889c2bfb): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:23:08.434269    7756 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network df9e889c2bfb73b549f12f75f8b00dfd41f53815701c725cbd70522ef640acf0 (br-df9e889c2bfb): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220604161913-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network df9e889c2bfb73b549f12f75f8b00dfd41f53815701c725cbd70522ef640acf0 (br-df9e889c2bfb): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:23:08.447047    7756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:23:09.587595    7756 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1395362s)
	I0604 16:23:09.594845    7756 cli_runner.go:164] Run: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:23:10.667041    7756 cli_runner.go:211] docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:23:10.667041    7756 cli_runner.go:217] Completed: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0720696s)
	I0604 16:23:10.667041    7756 client.go:171] LocalClient.Create took 6.7094041s
	I0604 16:23:12.699105    7756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:23:12.709105    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:13.784549    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:13.784612    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0753002s)
	I0604 16:23:13.784612    7756 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:14.061612    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:15.140014    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:15.140014    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.07828s)
	W0604 16:23:15.140014    7756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:23:15.140014    7756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:15.149659    7756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:23:15.156153    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:16.268341    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:16.268341    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.1121763s)
	I0604 16:23:16.268341    7756 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:16.481063    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:17.610427    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:17.610427    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.1292934s)
	W0604 16:23:17.610427    7756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:23:17.610427    7756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:17.610427    7756 start.go:134] duration metric: createHost completed in 13.6567857s
	I0604 16:23:17.621408    7756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:23:17.627763    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:18.727677    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:18.727732    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0997485s)
	I0604 16:23:18.727732    7756 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:19.057860    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:20.146544    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:20.146544    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.0885072s)
	W0604 16:23:20.146544    7756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:23:20.146544    7756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:20.157531    7756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:23:20.163763    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:21.277157    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:21.277387    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.1133822s)
	I0604 16:23:21.277492    7756 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:21.636590    7756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712
	W0604 16:23:22.777838    7756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712 returned with exit code 1
	I0604 16:23:22.777986    7756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: (1.1410942s)
	W0604 16:23:22.777986    7756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:23:22.777986    7756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220604161913-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220604161913-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	I0604 16:23:22.777986    7756 fix.go:57] fixHost completed within 48.1850575s
	I0604 16:23:22.777986    7756 start.go:81] releasing machines lock for "embed-certs-20220604161913-5712", held for 48.1855934s
	W0604 16:23:22.778582    7756 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220604161913-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220604161913-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	I0604 16:23:22.783952    7756 out.go:177] 
	W0604 16:23:22.787871    7756 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220604161913-5712 container: docker volume create embed-certs-20220604161913-5712 --label name.minikube.sigs.k8s.io=embed-certs-20220604161913-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220604161913-5712: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220604161913-5712': mkdir /var/lib/docker/volumes/embed-certs-20220604161913-5712: read-only file system
	
	W0604 16:23:22.788549    7756 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:23:22.788616    7756 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:23:22.794579    7756 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p embed-certs-20220604161913-5712 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1841176s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.9651478s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:27.205839    4592 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (118.52s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.87s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (2.9002513s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:21:39.972999    5584 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220604161933-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220604161933-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.850767s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1406082s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (2.9673911s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:21:46.952991    4592 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.87s)

TestStartStop/group/no-preload/serial/SecondStart (118.43s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220604161933-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220604161933-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m54.0950043s)

-- stdout --
	* [no-preload-20220604161933-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20220604161933-5712 in cluster no-preload-20220604161933-5712
	* Pulling base image ...
	* docker "no-preload-20220604161933-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220604161933-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:21:47.198298    5716 out.go:296] Setting OutFile to fd 1608 ...
	I0604 16:21:47.258374    5716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:47.258374    5716 out.go:309] Setting ErrFile to fd 1928...
	I0604 16:21:47.258374    5716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:21:47.270121    5716 out.go:303] Setting JSON to false
	I0604 16:21:47.272267    5716 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10779,"bootTime":1654348928,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:21:47.272267    5716 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:21:47.276211    5716 out.go:177] * [no-preload-20220604161933-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:21:47.279018    5716 notify.go:193] Checking for updates...
	I0604 16:21:47.281745    5716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:21:47.283995    5716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:21:47.285977    5716 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:21:47.288334    5716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:21:47.291531    5716 config.go:178] Loaded profile config "no-preload-20220604161933-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:21:47.292411    5716 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:21:49.943466    5716 docker.go:137] docker version: linux-20.10.16
	I0604 16:21:49.954536    5716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:21:52.001886    5716 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0473284s)
	I0604 16:21:52.002581    5716 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:21:51.0124641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:21:52.007052    5716 out.go:177] * Using the docker driver based on existing profile
	I0604 16:21:52.009495    5716 start.go:284] selected driver: docker
	I0604 16:21:52.009495    5716 start.go:806] validating driver "docker" against &{Name:no-preload-20220604161933-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220604161933-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:21:52.009722    5716 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:21:52.075371    5716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:21:54.142770    5716 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0673764s)
	I0604 16:21:54.142770    5716 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:21:53.1181161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:21:54.142770    5716 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:21:54.142770    5716 cni.go:95] Creating CNI manager for ""
	I0604 16:21:54.142770    5716 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:21:54.142770    5716 start_flags.go:306] config:
	{Name:no-preload-20220604161933-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220604161933-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:21:54.145776    5716 out.go:177] * Starting control plane node no-preload-20220604161933-5712 in cluster no-preload-20220604161933-5712
	I0604 16:21:54.148766    5716 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:21:54.151766    5716 out.go:177] * Pulling base image ...
	I0604 16:21:54.154765    5716 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:21:54.154765    5716 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:21:54.155765    5716 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220604161933-5712\config.json ...
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0
	I0604 16:21:54.155765    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
	I0604 16:21:54.330235    5716 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.330235    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0604 16:21:54.330764    5716 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 174.9964ms
	I0604 16:21:54.330876    5716 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0604 16:21:54.331643    5716 cache.go:107] acquiring lock: {Name:mk9255ee8c390126b963cceac501a1fcc40ecb6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.332245    5716 cache.go:107] acquiring lock: {Name:mk90a34f529b9ea089d74e18a271c58e34606f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.332245    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 exists
	I0604 16:21:54.332245    5716 cache.go:107] acquiring lock: {Name:mka0a7f9fce0e132e7529c42bed359c919fc231b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.332245    5716 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.23.6" took 176.4776ms
	I0604 16:21:54.332245    5716 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 succeeded
	I0604 16:21:54.332245    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 exists
	I0604 16:21:54.332245    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 exists
	I0604 16:21:54.332245    5716 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.23.6" took 176.4776ms
	I0604 16:21:54.332245    5716 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 succeeded
	I0604 16:21:54.332771    5716 cache.go:107] acquiring lock: {Name:mk1cf2f2eee53b81f1c95945c2dd3783d0c7d992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.332845    5716 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns\\coredns_v1.8.6" took 177.0773ms
	I0604 16:21:54.332845    5716 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 succeeded
	I0604 16:21:54.332845    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 exists
	I0604 16:21:54.332845    5716 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.23.6" took 177.0773ms
	I0604 16:21:54.332845    5716 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 succeeded
	I0604 16:21:54.346250    5716 cache.go:107] acquiring lock: {Name:mk3772b9dcb36c3cbc3aa4dfbe66c5266092e2c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.346250    5716 cache.go:107] acquiring lock: {Name:mkb7d2f7b32c5276784ba454e50c746d7fc6c05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.346250    5716 cache.go:107] acquiring lock: {Name:mk40b809628c4e9673e2a41bf9fb31b8a6b3529d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:54.346250    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 exists
	I0604 16:21:54.346250    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 exists
	I0604 16:21:54.346250    5716 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.6" took 190.4828ms
	I0604 16:21:54.346250    5716 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 succeeded
	I0604 16:21:54.346250    5716 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 exists
	I0604 16:21:54.346250    5716 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.5.1-0" took 190.4828ms
	I0604 16:21:54.346250    5716 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 succeeded
	I0604 16:21:54.346778    5716 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.23.6" took 191.0101ms
	I0604 16:21:54.346778    5716 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 succeeded
	I0604 16:21:54.346845    5716 cache.go:87] Successfully saved all images to host disk.
	I0604 16:21:55.262623    5716 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:21:55.262623    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:55.262623    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:55.262623    5716 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:21:55.262623    5716 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:21:55.263141    5716 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:21:55.263202    5716 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:21:55.263202    5716 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:21:55.263202    5716 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:21:57.671642    5716 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:21:57.671739    5716 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:21:57.671778    5716 start.go:352] acquiring machines lock for no-preload-20220604161933-5712: {Name:mkb9157c767b2183b064e561f5ba73bb0b5648b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:21:57.671778    5716 start.go:356] acquired machines lock for "no-preload-20220604161933-5712" in 0s
	I0604 16:21:57.671778    5716 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:21:57.672319    5716 fix.go:55] fixHost starting: 
	I0604 16:21:57.687723    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:58.757802    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:58.758064    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0700682s)
	I0604 16:21:58.758150    5716 fix.go:103] recreateIfNeeded on no-preload-20220604161933-5712: state= err=unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:58.758209    5716 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:21:58.762022    5716 out.go:177] * docker "no-preload-20220604161933-5712" container is missing, will recreate.
	I0604 16:21:58.764120    5716 delete.go:124] DEMOLISHING no-preload-20220604161933-5712 ...
	I0604 16:21:58.776192    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:21:59.877806    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:21:59.877806    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1016016s)
	W0604 16:21:59.877806    5716 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:59.878819    5716 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:21:59.892817    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:00.992801    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:00.992953    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0998011s)
	I0604 16:22:00.992953    5716 delete.go:82] Unable to get host status for no-preload-20220604161933-5712, assuming it has already been deleted: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:01.001308    5716 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220604161933-5712
	W0604 16:22:02.090941    5716 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:02.090999    5716 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220604161933-5712: (1.0896207s)
	I0604 16:22:02.090999    5716 kic.go:356] could not find the container no-preload-20220604161933-5712 to remove it. will try anyways
	I0604 16:22:02.097984    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:03.191628    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:03.191628    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0936314s)
	W0604 16:22:03.191628    5716 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:03.199788    5716 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0"
	W0604 16:22:04.298674    5716 cli_runner.go:211] docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:22:04.298767    5716 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0": (1.0987877s)
	I0604 16:22:04.298800    5716 oci.go:625] error shutdown no-preload-20220604161933-5712: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:05.321473    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:06.403961    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:06.403961    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.082356s)
	I0604 16:22:06.404273    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:06.404273    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:22:06.404273    5716 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:06.967666    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:08.075186    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:08.075186    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1074537s)
	I0604 16:22:08.075186    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:08.075186    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:22:08.075186    5716 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:09.176941    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:10.298353    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:10.298353    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1213546s)
	I0604 16:22:10.298353    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:10.298353    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:22:10.298353    5716 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:11.622976    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:12.698391    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:12.698391    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0754032s)
	I0604 16:22:12.698391    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:12.698391    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:22:12.698391    5716 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:14.302454    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:15.372304    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:15.372304    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0698379s)
	I0604 16:22:15.372304    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:15.372304    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:22:15.372304    5716 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:17.729933    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:18.853665    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:18.853972    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1234932s)
	I0604 16:22:18.854060    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:18.854060    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:22:18.854060    5716 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:23.385586    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:24.458041    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:24.458239    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0723626s)
	I0604 16:22:24.458431    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:24.458431    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:22:24.458523    5716 oci.go:88] couldn't shut down no-preload-20220604161933-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	 
	I0604 16:22:24.466244    5716 cli_runner.go:164] Run: docker rm -f -v no-preload-20220604161933-5712
	I0604 16:22:25.591763    5716 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220604161933-5712: (1.1254545s)
	I0604 16:22:25.599165    5716 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220604161933-5712
	W0604 16:22:26.684884    5716 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:26.684884    5716 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220604161933-5712: (1.0857073s)
	I0604 16:22:26.690849    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:22:27.742687    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:22:27.742687    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0518264s)
	I0604 16:22:27.749707    5716 network_create.go:272] running [docker network inspect no-preload-20220604161933-5712] to gather additional debugging logs...
	I0604 16:22:27.749707    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712
	W0604 16:22:28.834386    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:28.834386    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712: (1.084667s)
	I0604 16:22:28.854392    5716 network_create.go:275] error running [docker network inspect no-preload-20220604161933-5712]: docker network inspect no-preload-20220604161933-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220604161933-5712
	I0604 16:22:28.854947    5716 network_create.go:277] output of [docker network inspect no-preload-20220604161933-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220604161933-5712
	
	** /stderr **
	W0604 16:22:28.855944    5716 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:22:28.855944    5716 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:22:29.857235    5716 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:22:29.862325    5716 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:22:29.862986    5716 start.go:165] libmachine.API.Create for "no-preload-20220604161933-5712" (driver="docker")
	I0604 16:22:29.862986    5716 client.go:168] LocalClient.Create starting
	I0604 16:22:29.862986    5716 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:22:29.863557    5716 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:29.863720    5716 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:29.863819    5716 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:22:29.863819    5716 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:29.863819    5716 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:29.871665    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:22:30.899258    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:22:30.899288    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0272796s)
	I0604 16:22:30.906607    5716 network_create.go:272] running [docker network inspect no-preload-20220604161933-5712] to gather additional debugging logs...
	I0604 16:22:30.906607    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712
	W0604 16:22:31.941513    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:31.941513    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712: (1.0348951s)
	I0604 16:22:31.941513    5716 network_create.go:275] error running [docker network inspect no-preload-20220604161933-5712]: docker network inspect no-preload-20220604161933-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220604161933-5712
	I0604 16:22:31.941513    5716 network_create.go:277] output of [docker network inspect no-preload-20220604161933-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220604161933-5712
	
	** /stderr **
	I0604 16:22:31.949979    5716 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:22:33.018599    5716 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0684204s)
	I0604 16:22:33.038032    5716 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000624ad8] misses:0}
	I0604 16:22:33.038340    5716 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:22:33.038340    5716 network_create.go:115] attempt to create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:22:33.044184    5716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712
	W0604 16:22:34.052899    5716 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:34.052899    5716 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: (1.0085683s)
	E0604 16:22:34.052899    5716 network_create.go:104] error while trying to create docker network no-preload-20220604161933-5712 192.168.49.0/24: create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ef310e7cd89d4bab5901cb45d774c6fddfd332118bef318f652922c34822f5fa (br-ef310e7cd89d): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:22:34.053468    5716 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ef310e7cd89d4bab5901cb45d774c6fddfd332118bef318f652922c34822f5fa (br-ef310e7cd89d): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ef310e7cd89d4bab5901cb45d774c6fddfd332118bef318f652922c34822f5fa (br-ef310e7cd89d): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:22:34.068660    5716 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:22:35.113466    5716 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0447941s)
	I0604 16:22:35.120466    5716 cli_runner.go:164] Run: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:22:36.188731    5716 cli_runner.go:211] docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:22:36.188903    5716 cli_runner.go:217] Completed: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0682533s)
	I0604 16:22:36.188903    5716 client.go:171] LocalClient.Create took 6.3258487s
	I0604 16:22:38.207206    5716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:38.215206    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:39.365394    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:39.365394    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1501754s)
	I0604 16:22:39.365394    5716 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:39.547951    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:40.653428    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:40.653706    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1054657s)
	W0604 16:22:40.653991    5716 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:22:40.653991    5716 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:40.664531    5716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:40.670547    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:41.773319    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:41.773319    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1027593s)
	I0604 16:22:41.773319    5716 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:41.985680    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:43.047114    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:43.047114    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0614219s)
	W0604 16:22:43.047114    5716 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:22:43.047114    5716 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:43.047114    5716 start.go:134] duration metric: createHost completed in 13.1895112s
	I0604 16:22:43.059084    5716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:43.066362    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:44.142524    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:44.142572    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0759959s)
	I0604 16:22:44.142703    5716 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:44.491719    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:45.589792    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:45.589792    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0980213s)
	W0604 16:22:45.589792    5716 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:22:45.589792    5716 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:45.598792    5716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:45.605805    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:46.687958    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:46.688203    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0821411s)
	I0604 16:22:46.688377    5716 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:46.917212    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:22:48.030757    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:48.030757    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1135327s)
	W0604 16:22:48.030757    5716 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:22:48.030757    5716 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:48.030757    5716 fix.go:57] fixHost completed within 50.3578891s
	I0604 16:22:48.030757    5716 start.go:81] releasing machines lock for "no-preload-20220604161933-5712", held for 50.3584299s
	W0604 16:22:48.030757    5716 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	W0604 16:22:48.030757    5716 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	I0604 16:22:48.030757    5716 start.go:614] Will try again in 5 seconds ...
	I0604 16:22:53.037083    5716 start.go:352] acquiring machines lock for no-preload-20220604161933-5712: {Name:mkb9157c767b2183b064e561f5ba73bb0b5648b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:22:53.037136    5716 start.go:356] acquired machines lock for "no-preload-20220604161933-5712" in 0s
	I0604 16:22:53.037136    5716 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:22:53.037136    5716 fix.go:55] fixHost starting: 
	I0604 16:22:53.053238    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:54.104500    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:54.104500    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0512506s)
	I0604 16:22:54.104500    5716 fix.go:103] recreateIfNeeded on no-preload-20220604161933-5712: state= err=unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:54.104500    5716 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:22:54.108501    5716 out.go:177] * docker "no-preload-20220604161933-5712" container is missing, will recreate.
	I0604 16:22:54.110505    5716 delete.go:124] DEMOLISHING no-preload-20220604161933-5712 ...
	I0604 16:22:54.123500    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:55.178020    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:55.178020    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0545082s)
	W0604 16:22:55.178020    5716 stop.go:75] unable to get state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:55.178020    5716 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:55.192022    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:56.233529    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:56.233597    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0413839s)
	I0604 16:22:56.233597    5716 delete.go:82] Unable to get host status for no-preload-20220604161933-5712, assuming it has already been deleted: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:56.240916    5716 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220604161933-5712
	W0604 16:22:57.305145    5716 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:22:57.305145    5716 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220604161933-5712: (1.0642169s)
	I0604 16:22:57.305145    5716 kic.go:356] could not find the container no-preload-20220604161933-5712 to remove it. will try anyways
	I0604 16:22:57.313140    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:22:58.378032    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:58.378032    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0648803s)
	W0604 16:22:58.378032    5716 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:22:58.385020    5716 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0"
	W0604 16:22:59.528130    5716 cli_runner.go:211] docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:22:59.528297    5716 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0": (1.1430975s)
	I0604 16:22:59.528297    5716 oci.go:625] error shutdown no-preload-20220604161933-5712: docker exec --privileged -t no-preload-20220604161933-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:00.540708    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:23:01.656980    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:01.656980    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1162601s)
	I0604 16:23:01.656980    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:01.656980    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:23:01.656980    5716 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:02.152323    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:23:03.230871    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:03.230871    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.078536s)
	I0604 16:23:03.230871    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:03.230871    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:23:03.230871    5716 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:03.840712    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:23:04.929903    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:04.929903    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.089179s)
	I0604 16:23:04.929903    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:04.929903    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:23:04.929903    5716 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:05.842258    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:23:06.971826    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:06.971826    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1295258s)
	I0604 16:23:06.971826    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:06.971826    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:23:06.971826    5716 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:08.982299    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:23:10.072736    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:10.072736    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.0904254s)
	I0604 16:23:10.072736    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:10.072736    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:23:10.072736    5716 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:11.913885    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:23:12.951498    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:12.951498    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.037601s)
	I0604 16:23:12.951498    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:12.951498    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:23:12.951498    5716 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:15.631850    5716 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:23:16.736064    5716 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:16.736064    5716 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (1.1042019s)
	I0604 16:23:16.736064    5716 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:16.736064    5716 oci.go:639] temporary error: container no-preload-20220604161933-5712 status is  but expect it to be exited
	I0604 16:23:16.736064    5716 oci.go:88] couldn't shut down no-preload-20220604161933-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	 
	I0604 16:23:16.744463    5716 cli_runner.go:164] Run: docker rm -f -v no-preload-20220604161933-5712
	I0604 16:23:17.844118    5716 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220604161933-5712: (1.0996436s)
	I0604 16:23:17.851118    5716 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220604161933-5712
	W0604 16:23:18.940288    5716 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:18.940288    5716 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220604161933-5712: (1.0891576s)
	I0604 16:23:18.948092    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:23:20.052335    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:23:20.052392    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1040471s)
	I0604 16:23:20.060787    5716 network_create.go:272] running [docker network inspect no-preload-20220604161933-5712] to gather additional debugging logs...
	I0604 16:23:20.060787    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712
	W0604 16:23:21.138198    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:21.138404    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712: (1.0773998s)
	I0604 16:23:21.138467    5716 network_create.go:275] error running [docker network inspect no-preload-20220604161933-5712]: docker network inspect no-preload-20220604161933-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220604161933-5712
	I0604 16:23:21.138467    5716 network_create.go:277] output of [docker network inspect no-preload-20220604161933-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220604161933-5712
	
	** /stderr **
	W0604 16:23:21.139194    5716 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:23:21.139194    5716 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:23:22.149223    5716 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:23:22.152590    5716 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:23:22.153198    5716 start.go:165] libmachine.API.Create for "no-preload-20220604161933-5712" (driver="docker")
	I0604 16:23:22.153198    5716 client.go:168] LocalClient.Create starting
	I0604 16:23:22.153728    5716 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:23:22.153882    5716 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:22.153882    5716 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:22.153882    5716 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:23:22.154426    5716 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:22.154497    5716 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:22.163213    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:23:23.339932    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:23:23.339932    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1767061s)
	I0604 16:23:23.346932    5716 network_create.go:272] running [docker network inspect no-preload-20220604161933-5712] to gather additional debugging logs...
	I0604 16:23:23.346932    5716 cli_runner.go:164] Run: docker network inspect no-preload-20220604161933-5712
	W0604 16:23:24.429541    5716 cli_runner.go:211] docker network inspect no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:24.429541    5716 cli_runner.go:217] Completed: docker network inspect no-preload-20220604161933-5712: (1.0825969s)
	I0604 16:23:24.429541    5716 network_create.go:275] error running [docker network inspect no-preload-20220604161933-5712]: docker network inspect no-preload-20220604161933-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220604161933-5712
	I0604 16:23:24.429541    5716 network_create.go:277] output of [docker network inspect no-preload-20220604161933-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220604161933-5712
	
	** /stderr **
	I0604 16:23:24.437542    5716 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:23:25.497440    5716 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0598866s)
	I0604 16:23:25.513456    5716 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000624ad8] amended:false}} dirty:map[] misses:0}
	I0604 16:23:25.513456    5716 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:23:25.528696    5716 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000624ad8] amended:true}} dirty:map[192.168.49.0:0xc000624ad8 192.168.58.0:0xc000006cc8] misses:0}
	I0604 16:23:25.529611    5716 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:23:25.529611    5716 network_create.go:115] attempt to create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:23:25.536886    5716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712
	W0604 16:23:26.623356    5716 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:26.623525    5716 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: (1.0863005s)
	E0604 16:23:26.623596    5716 network_create.go:104] error while trying to create docker network no-preload-20220604161933-5712 192.168.58.0/24: create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b8edc7bf37775805abb77bb50fe8afda44c0e03a73460da53190b4636e8dde98 (br-b8edc7bf3777): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:23:26.623596    5716 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b8edc7bf37775805abb77bb50fe8afda44c0e03a73460da53190b4636e8dde98 (br-b8edc7bf3777): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220604161933-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220604161933-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b8edc7bf37775805abb77bb50fe8afda44c0e03a73460da53190b4636e8dde98 (br-b8edc7bf3777): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:23:26.644837    5716 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:23:27.739746    5716 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0948966s)
	I0604 16:23:27.746861    5716 cli_runner.go:164] Run: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:23:28.916679    5716 cli_runner.go:211] docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:23:28.916679    5716 cli_runner.go:217] Completed: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: (1.1698053s)
	I0604 16:23:28.916679    5716 client.go:171] LocalClient.Create took 6.7634073s
	I0604 16:23:30.938306    5716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:23:30.947286    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:32.089857    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:32.089857    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1425578s)
	I0604 16:23:32.089857    5716 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:32.380980    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:33.468791    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:33.468791    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0877988s)
	W0604 16:23:33.468791    5716 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:23:33.468791    5716 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:33.480753    5716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:23:33.488757    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:34.546833    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:34.546833    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0579981s)
	I0604 16:23:34.546892    5716 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:34.757134    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:35.839626    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:35.839626    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0824454s)
	W0604 16:23:35.839626    5716 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:23:35.839626    5716 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:35.839626    5716 start.go:134] duration metric: createHost completed in 13.6900881s
	I0604 16:23:35.849646    5716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:23:35.855621    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:36.979089    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:36.979089    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1234551s)
	I0604 16:23:36.979089    5716 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:37.310104    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:38.427380    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:38.427380    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1172642s)
	W0604 16:23:38.427380    5716 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:23:38.427380    5716 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:38.437411    5716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:23:38.444415    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:39.576105    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:39.576105    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.1314636s)
	I0604 16:23:39.576105    5716 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:39.929530    5716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712
	W0604 16:23:41.020650    5716 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712 returned with exit code 1
	I0604 16:23:41.020886    5716 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: (1.0911082s)
	W0604 16:23:41.020945    5716 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:23:41.020945    5716 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220604161933-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220604161933-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	I0604 16:23:41.020945    5716 fix.go:57] fixHost completed within 47.9832862s
	I0604 16:23:41.020945    5716 start.go:81] releasing machines lock for "no-preload-20220604161933-5712", held for 47.9832862s
	W0604 16:23:41.021481    5716 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220604161933-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220604161933-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	I0604 16:23:41.026991    5716 out.go:177] 
	W0604 16:23:41.028431    5716 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220604161933-5712 container: docker volume create no-preload-20220604161933-5712 --label name.minikube.sigs.k8s.io=no-preload-20220604161933-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220604161933-5712: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220604161933-5712': mkdir /var/lib/docker/volumes/no-preload-20220604161933-5712: read-only file system
	
	W0604 16:23:41.028431    5716 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:23:41.029365    5716 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:23:41.031953    5716 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-20220604161933-5712 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1499903s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (2.9920477s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:45.383309    4164 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (118.43s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (81.45s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220604162205-5712 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220604162205-5712 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m17.2275793s)

-- stdout --
	* [default-k8s-different-port-20220604162205-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node default-k8s-different-port-20220604162205-5712 in cluster default-k8s-different-port-20220604162205-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220604162205-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:22:05.286466    9088 out.go:296] Setting OutFile to fd 1928 ...
	I0604 16:22:05.346473    9088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:22:05.346473    9088 out.go:309] Setting ErrFile to fd 1544...
	I0604 16:22:05.346473    9088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:22:05.357474    9088 out.go:303] Setting JSON to false
	I0604 16:22:05.359469    9088 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10797,"bootTime":1654348928,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:22:05.360471    9088 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:22:05.367468    9088 out.go:177] * [default-k8s-different-port-20220604162205-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:22:05.375468    9088 notify.go:193] Checking for updates...
	I0604 16:22:05.377478    9088 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:22:05.380469    9088 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:22:05.382469    9088 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:22:05.384486    9088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:22:05.392473    9088 config.go:178] Loaded profile config "embed-certs-20220604161913-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:22:05.392473    9088 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:22:05.393478    9088 config.go:178] Loaded profile config "no-preload-20220604161933-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:22:05.393478    9088 config.go:178] Loaded profile config "old-k8s-version-20220604161852-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0604 16:22:05.393478    9088 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:22:08.137665    9088 docker.go:137] docker version: linux-20.10.16
	I0604 16:22:08.148217    9088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:22:10.204629    9088 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0563901s)
	I0604 16:22:10.204629    9088 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:22:09.1787715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:22:10.208637    9088 out.go:177] * Using the docker driver based on user configuration
	I0604 16:22:10.211623    9088 start.go:284] selected driver: docker
	I0604 16:22:10.212634    9088 start.go:806] validating driver "docker" against <nil>
	I0604 16:22:10.212634    9088 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:22:10.281381    9088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:22:12.226227    9088 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9448248s)
	I0604 16:22:12.226227    9088 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:22:11.2415155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:22:12.226227    9088 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:22:12.227224    9088 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:22:12.230228    9088 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:22:12.232224    9088 cni.go:95] Creating CNI manager for ""
	I0604 16:22:12.232224    9088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:22:12.232224    9088 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220604162205-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220604162205-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:22:12.235225    9088 out.go:177] * Starting control plane node default-k8s-different-port-20220604162205-5712 in cluster default-k8s-different-port-20220604162205-5712
	I0604 16:22:12.241235    9088 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:22:12.244229    9088 out.go:177] * Pulling base image ...
	I0604 16:22:12.247224    9088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:22:12.247224    9088 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:22:12.247224    9088 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:22:12.247224    9088 cache.go:57] Caching tarball of preloaded images
	I0604 16:22:12.248229    9088 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:22:12.248229    9088 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:22:12.248229    9088 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220604162205-5712\config.json ...
	I0604 16:22:12.248229    9088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220604162205-5712\config.json: {Name:mk0324025495738f2c6cac3c7207ecd896689b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:22:13.395267    9088 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:22:13.395267    9088 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:22:13.395267    9088 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:22:13.395267    9088 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:22:13.395267    9088 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:22:13.395267    9088 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:22:13.395267    9088 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:22:13.395267    9088 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:22:13.395267    9088 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:22:15.717919    9088 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:22:15.718020    9088 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:22:15.718020    9088 start.go:352] acquiring machines lock for default-k8s-different-port-20220604162205-5712: {Name:mka7c4079f67ca8a42486acaf1dd6d7206313e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:22:15.718020    9088 start.go:356] acquired machines lock for "default-k8s-different-port-20220604162205-5712" in 0s
	I0604 16:22:15.718020    9088 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220604162205-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-por
t-20220604162205-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:22:15.718674    9088 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:22:15.722376    9088 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:22:15.722376    9088 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220604162205-5712" (driver="docker")
	I0604 16:22:15.722900    9088 client.go:168] LocalClient.Create starting
	I0604 16:22:15.723161    9088 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:22:15.723687    9088 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:15.723687    9088 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:15.723975    9088 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:22:15.723975    9088 main.go:134] libmachine: Decoding PEM data...
	I0604 16:22:15.723975    9088 main.go:134] libmachine: Parsing certificate...
	I0604 16:22:15.735271    9088 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:22:16.802291    9088 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:22:16.802291    9088 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0670081s)
	I0604 16:22:16.809178    9088 network_create.go:272] running [docker network inspect default-k8s-different-port-20220604162205-5712] to gather additional debugging logs...
	I0604 16:22:16.809178    9088 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712
	W0604 16:22:17.860397    9088 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:22:17.860397    9088 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712: (1.0512077s)
	I0604 16:22:17.860397    9088 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220604162205-5712]: docker network inspect default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220604162205-5712
	I0604 16:22:17.860397    9088 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220604162205-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220604162205-5712
	
	** /stderr **
	I0604 16:22:17.872500    9088 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:22:18.948280    9088 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0757688s)
	I0604 16:22:18.976336    9088 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006200] misses:0}
	I0604 16:22:18.976479    9088 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
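The derived fields that `network.go:235` logs for the chosen subnet (gateway, client range, broadcast) follow directly from the CIDR. A minimal Python sketch reproduces them for 192.168.49.0/24; minikube itself computes these in Go, so this is illustrative only:

```python
import ipaddress

# Reproduce the fields minikube logs for its chosen private subnet.
# Illustrative sketch; minikube derives these values in Go.
net = ipaddress.ip_network("192.168.49.0/24")
hosts = list(net.hosts())        # usable addresses: .1 through .254

gateway = hosts[0]               # first usable address -> Gateway
client_min = hosts[1]            # first address handed to containers -> ClientMin
client_max = hosts[-1]           # last usable address -> ClientMax
broadcast = net.broadcast_address

print(gateway, client_min, client_max, broadcast)
# → 192.168.49.1 192.168.49.2 192.168.49.254 192.168.49.255
```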
	I0604 16:22:18.976479    9088 network_create.go:115] attempt to create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:22:18.984153    9088 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712
	W0604 16:22:20.018100    9088 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:22:20.018100    9088 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: (1.0339355s)
	E0604 16:22:20.018100    9088 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24: create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a41bffe753634c201196ccff8413eb68715bad15ac14b88f599cd8f8c1f7be90 (br-a41bffe75363): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:22:20.018100    9088 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a41bffe753634c201196ccff8413eb68715bad15ac14b88f599cd8f8c1f7be90 (br-a41bffe75363): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a41bffe753634c201196ccff8413eb68715bad15ac14b88f599cd8f8c1f7be90 (br-a41bffe75363): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
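The daemon rejects `docker network create` above because the requested 192.168.49.0/24 bridge overlaps an existing bridge network's IPv4 range. The overlap test itself can be sketched with Python's `ipaddress` module; the in-use network's actual CIDR is not shown in the log, so the 192.168.48.0/23 range below is an assumed example:

```python
import ipaddress

# Sketch of the check behind "networks have overlapping IPv4":
# a new bridge subnet must not overlap any existing network.
# The conflicting network's real CIDR is not in the log;
# 192.168.48.0/23 is a hypothetical in-use range for illustration.
requested = ipaddress.ip_network("192.168.49.0/24")
existing = ipaddress.ip_network("192.168.48.0/23")

print(requested.overlaps(existing))
# → True, so the daemon would refuse to create the new network
```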
	
	I0604 16:22:20.032531    9088 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:22:21.064514    9088 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0319716s)
	I0604 16:22:21.071942    9088 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:22:22.141850    9088 cli_runner.go:211] docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:22:22.141893    9088 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0697302s)
	I0604 16:22:22.141963    9088 client.go:171] LocalClient.Create took 6.4189925s
	I0604 16:22:24.157611    9088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:22:24.164487    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:22:25.293893    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:22:25.293893    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1292428s)
	I0604 16:22:25.293893    9088 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:25.584511    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:22:26.668877    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:22:26.668877    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0843543s)
	W0604 16:22:26.668877    9088 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:22:26.668877    9088 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:26.678889    9088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:22:26.684884    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:22:27.758700    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:22:27.758700    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0728142s)
	I0604 16:22:27.758700    9088 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:28.069792    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:22:29.181398    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:22:29.181493    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1115938s)
	W0604 16:22:29.181578    9088 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:22:29.181578    9088 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:29.181578    9088 start.go:134] duration metric: createHost completed in 13.4627574s
	I0604 16:22:29.181578    9088 start.go:81] releasing machines lock for "default-k8s-different-port-20220604162205-5712", held for 13.4634117s
	W0604 16:22:29.181578    9088 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	I0604 16:22:29.197814    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:30.251460    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:30.251460    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0532804s)
	I0604 16:22:30.251534    9088 delete.go:82] Unable to get host status for default-k8s-different-port-20220604162205-5712, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	W0604 16:22:30.251702    9088 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	I0604 16:22:30.251702    9088 start.go:614] Will try again in 5 seconds ...
	I0604 16:22:35.252098    9088 start.go:352] acquiring machines lock for default-k8s-different-port-20220604162205-5712: {Name:mka7c4079f67ca8a42486acaf1dd6d7206313e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:22:35.252098    9088 start.go:356] acquired machines lock for "default-k8s-different-port-20220604162205-5712" in 0s
	I0604 16:22:35.252098    9088 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:22:35.252616    9088 fix.go:55] fixHost starting: 
	I0604 16:22:35.267508    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:36.359642    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:36.359773    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.091944s)
	I0604 16:22:36.359865    9088 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220604162205-5712: state= err=unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:36.359909    9088 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:22:36.364575    9088 out.go:177] * docker "default-k8s-different-port-20220604162205-5712" container is missing, will recreate.
	I0604 16:22:36.366459    9088 delete.go:124] DEMOLISHING default-k8s-different-port-20220604162205-5712 ...
	I0604 16:22:36.381338    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:37.471354    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:37.471415    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0898758s)
	W0604 16:22:37.471415    9088 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:37.471415    9088 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:37.486314    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:38.585102    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:38.585102    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0987766s)
	I0604 16:22:38.585102    9088 delete.go:82] Unable to get host status for default-k8s-different-port-20220604162205-5712, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:38.595585    9088 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712
	W0604 16:22:39.695003    9088 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:22:39.695003    9088 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712: (1.0994063s)
	I0604 16:22:39.695003    9088 kic.go:356] could not find the container default-k8s-different-port-20220604162205-5712 to remove it. will try anyways
	I0604 16:22:39.703002    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:40.812528    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:40.812528    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1095146s)
	W0604 16:22:40.812528    9088 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:40.820574    9088 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0"
	W0604 16:22:41.881933    9088 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:22:41.881933    9088 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0": (1.0613474s)
	I0604 16:22:41.881933    9088 oci.go:625] error shutdown default-k8s-different-port-20220604162205-5712: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:42.899804    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:43.998808    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:43.998808    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0989916s)
	I0604 16:22:43.998808    9088 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:43.998808    9088 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:22:43.998808    9088 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:44.472791    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:45.574612    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:45.574750    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1018092s)
	I0604 16:22:45.574750    9088 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:45.574750    9088 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:22:45.574750    9088 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:46.472915    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:47.555850    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:47.555850    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0829232s)
	I0604 16:22:47.555850    9088 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:47.555850    9088 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:22:47.555850    9088 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:48.210665    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:49.226460    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:49.226460    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0157833s)
	I0604 16:22:49.226460    9088 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:49.226460    9088 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:22:49.226460    9088 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:50.353677    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:51.421917    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:51.421917    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0672855s)
	I0604 16:22:51.421917    9088 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:51.421917    9088 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:22:51.421917    9088 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:52.942777    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:54.011903    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:54.012049    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0690417s)
	I0604 16:22:54.012049    9088 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:54.012049    9088 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:22:54.012049    9088 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:57.066054    9088 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:22:58.143983    9088 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:22:58.143983    9088 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0773s)
	I0604 16:22:58.143983    9088 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:22:58.143983    9088 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:22:58.143983    9088 oci.go:88] couldn't shut down default-k8s-different-port-20220604162205-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	 
	I0604 16:22:58.150977    9088 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220604162205-5712
	I0604 16:22:59.294524    9088 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220604162205-5712: (1.1435351s)
	I0604 16:22:59.302949    9088 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712
	W0604 16:23:00.465714    9088 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:00.465714    9088 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712: (1.1626069s)
	I0604 16:23:00.472712    9088 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:23:01.577015    9088 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:23:01.577015    9088 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1035253s)
	I0604 16:23:01.583979    9088 network_create.go:272] running [docker network inspect default-k8s-different-port-20220604162205-5712] to gather additional debugging logs...
	I0604 16:23:01.583979    9088 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712
	W0604 16:23:02.646023    9088 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:02.646053    9088 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712: (1.0618988s)
	I0604 16:23:02.646126    9088 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220604162205-5712]: docker network inspect default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220604162205-5712
	I0604 16:23:02.646126    9088 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220604162205-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220604162205-5712
	
	** /stderr **
	W0604 16:23:02.647118    9088 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:23:02.647118    9088 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:23:03.657280    9088 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:23:03.657280    9088 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:23:03.657280    9088 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220604162205-5712" (driver="docker")
	I0604 16:23:03.657280    9088 client.go:168] LocalClient.Create starting
	I0604 16:23:03.657280    9088 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:23:03.657280    9088 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:03.657280    9088 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:03.657280    9088 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:23:03.657280    9088 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:03.657280    9088 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:03.674692    9088 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:23:04.754416    9088 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:23:04.754416    9088 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.079712s)
	I0604 16:23:04.763460    9088 network_create.go:272] running [docker network inspect default-k8s-different-port-20220604162205-5712] to gather additional debugging logs...
	I0604 16:23:04.763460    9088 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712
	W0604 16:23:05.862932    9088 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:05.862932    9088 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712: (1.0994601s)
	I0604 16:23:05.862932    9088 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220604162205-5712]: docker network inspect default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220604162205-5712
	I0604 16:23:05.862932    9088 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220604162205-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220604162205-5712
	
	** /stderr **
	I0604 16:23:05.869939    9088 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:23:06.955566    9088 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0856151s)
	I0604 16:23:06.971826    9088 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006200] amended:false}} dirty:map[] misses:0}
	I0604 16:23:06.971826    9088 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:23:06.987976    9088 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006200] amended:true}} dirty:map[192.168.49.0:0xc000006200 192.168.58.0:0xc00058c2e0] misses:0}
	I0604 16:23:06.987976    9088 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:23:06.987976    9088 network_create.go:115] attempt to create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:23:06.995081    9088 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712
	W0604 16:23:08.057408    9088 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:08.057408    9088 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: (1.0623149s)
	E0604 16:23:08.057408    9088 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24: create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c34ee6dda54ded6620d4d7acaf5b177e8e17690b6b9e972d0c38dfc76d0ef466 (br-c34ee6dda54d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:23:08.057408    9088 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c34ee6dda54ded6620d4d7acaf5b177e8e17690b6b9e972d0c38dfc76d0ef466 (br-c34ee6dda54d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c34ee6dda54ded6620d4d7acaf5b177e8e17690b6b9e972d0c38dfc76d0ef466 (br-c34ee6dda54d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:23:08.074352    9088 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:23:09.212182    9088 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1378175s)
	I0604 16:23:09.220943    9088 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:23:10.350633    9088 cli_runner.go:211] docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:23:10.350633    9088 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: (1.1296776s)
	I0604 16:23:10.350633    9088 client.go:171] LocalClient.Create took 6.6932806s
	I0604 16:23:12.380341    9088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:23:12.387156    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:13.485566    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:13.485745    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.098398s)
	I0604 16:23:13.485745    9088 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:13.824845    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:14.920815    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:14.920815    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0959578s)
	W0604 16:23:14.920815    9088 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:23:14.920815    9088 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:14.930806    9088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:23:14.936812    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:16.045035    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:16.045035    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1080969s)
	I0604 16:23:16.045035    9088 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:16.274294    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:17.360080    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:17.360157    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0846669s)
	W0604 16:23:17.360157    9088 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:23:17.360157    9088 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:17.360157    9088 start.go:134] duration metric: createHost completed in 13.7027274s
	I0604 16:23:17.371117    9088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:23:17.379338    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:18.474745    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:18.474745    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0953951s)
	I0604 16:23:18.474745    9088 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:18.736669    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:19.849330    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:19.849330    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1125536s)
	W0604 16:23:19.849529    9088 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:23:19.849586    9088 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:19.859676    9088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:23:19.866223    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:20.966545    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:20.966545    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1003097s)
	I0604 16:23:20.966545    9088 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:21.178282    9088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:22.243052    9088 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:22.243113    9088 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0645284s)
	W0604 16:23:22.243113    9088 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:23:22.243113    9088 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:22.243113    9088 fix.go:57] fixHost completed within 46.9899848s
	I0604 16:23:22.243113    9088 start.go:81] releasing machines lock for "default-k8s-different-port-20220604162205-5712", held for 46.9905024s
	W0604 16:23:22.243773    9088 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220604162205-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220604162205-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	I0604 16:23:22.248269    9088 out.go:177] 
	W0604 16:23:22.250330    9088 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	W0604 16:23:22.250330    9088 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:23:22.250330    9088 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:23:22.254207    9088 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220604162205-5712 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1403047s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9802093s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:26.479158    5172 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (81.45s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (4.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220604161852-5712" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1594464s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (3.014293s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:08.416044    3320 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (4.18s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (4.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220604161852-5712" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220604161852-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220604161852-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (227.3149ms)

** stderr ** 
	error: context "old-k8s-version-20220604161852-5712" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220604161852-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1714377s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.8601777s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:12.683767    4476 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (4.26s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220604161852-5712 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220604161852-5712 "sudo crictl images -o json": exit status 80 (3.2380597s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220604161852-5712 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:306: v1.16.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1805778s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (3.0203482s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:20.130519    6692 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.45s)

TestStartStop/group/old-k8s-version/serial/Pause (11.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220604161852-5712 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220604161852-5712 --alsologtostderr -v=1: exit status 80 (3.3364052s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 16:23:20.411153    7640 out.go:296] Setting OutFile to fd 1416 ...
	I0604 16:23:20.470258    7640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:20.470258    7640 out.go:309] Setting ErrFile to fd 1660...
	I0604 16:23:20.470258    7640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:20.481808    7640 out.go:303] Setting JSON to false
	I0604 16:23:20.481808    7640 mustload.go:65] Loading cluster: old-k8s-version-20220604161852-5712
	I0604 16:23:20.481808    7640 config.go:178] Loaded profile config "old-k8s-version-20220604161852-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0604 16:23:20.496077    7640 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}
	W0604 16:23:23.121329    7640 cli_runner.go:211] docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:23.121329    7640 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: (2.6252235s)
	I0604 16:23:23.128332    7640 out.go:177] 
	W0604 16:23:23.131337    7640 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712
	
	W0604 16:23:23.131337    7640 out.go:239] * 
	* 
	W0604 16:23:23.458935    7640 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:23:23.461932    7640 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220604161852-5712 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1316096s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9461585s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:27.553511    8388 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220604161852-5712: exit status 1 (1.1830541s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220604161852-5712 -n old-k8s-version-20220604161852-5712: exit status 7 (2.9523526s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:31.713675    5336 status.go:247] status error: host: state: unknown state "old-k8s-version-20220604161852-5712": docker container inspect old-k8s-version-20220604161852-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220604161852-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220604161852-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (11.57s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.55s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220604162205-5712 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220604162205-5712 create -f testdata\busybox.yaml: exit status 1 (234.2126ms)

** stderr ** 
	error: context "default-k8s-different-port-20220604162205-5712" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context default-k8s-different-port-20220604162205-5712 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.202542s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9580143s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:30.880582    4060 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1517938s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9863083s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:35.044244    8736 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.55s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (4.1s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220604161913-5712" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.2042933s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.8817938s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:31.304360    3124 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (4.10s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (4.49s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220604161913-5712" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220604161913-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context embed-certs-20220604161913-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (248.5619ms)

** stderr ** 
	error: context "embed-certs-20220604161913-5712" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20220604161913-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1791342s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (3.0441503s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:35.791620    2064 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (4.49s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.52s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220604162205-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220604162205-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9751089s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220604162205-5712 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220604162205-5712 describe deploy/metrics-server -n kube-system: exit status 1 (248.4325ms)

** stderr ** 
	error: context "default-k8s-different-port-20220604162205-5712" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20220604162205-5712 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1936059s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (3.0834156s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:42.549216    1700 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.52s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.46s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220604161913-5712 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p embed-certs-20220604161913-5712 "sudo crictl images -o json": exit status 80 (3.2525557s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p embed-certs-20220604161913-5712 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1817797s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (3.0191404s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:43.240237    8756 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.46s)

TestStartStop/group/default-k8s-different-port/serial/Stop (27.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220604162205-5712 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220604162205-5712 --alsologtostderr -v=3: exit status 82 (22.9829473s)

-- stdout --
	* Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	* Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	* Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	* Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	* Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	* Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:23:42.812912    4556 out.go:296] Setting OutFile to fd 1512 ...
	I0604 16:23:42.870913    4556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:42.870913    4556 out.go:309] Setting ErrFile to fd 1756...
	I0604 16:23:42.870913    4556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:42.881925    4556 out.go:303] Setting JSON to false
	I0604 16:23:42.882825    4556 daemonize_windows.go:44] trying to kill existing schedule stop for profile default-k8s-different-port-20220604162205-5712...
	I0604 16:23:42.894274    4556 ssh_runner.go:195] Run: systemctl --version
	I0604 16:23:42.900888    4556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:45.553026    4556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:45.553026    4556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (2.652109s)
	I0604 16:23:45.562025    4556 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0604 16:23:45.569052    4556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:46.695077    4556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:46.695077    4556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1260124s)
	I0604 16:23:46.695077    4556 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:47.077627    4556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:23:48.185483    4556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:23:48.185580    4556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1077641s)
	I0604 16:23:48.185856    4556 openrc.go:165] stop output: 
	E0604 16:23:48.185895    4556 daemonize_windows.go:38] error terminating scheduled stop for profile default-k8s-different-port-20220604162205-5712: stopping schedule-stop service for profile default-k8s-different-port-20220604162205-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:48.185958    4556 mustload.go:65] Loading cluster: default-k8s-different-port-20220604162205-5712
	I0604 16:23:48.186686    4556 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:23:48.186956    4556 stop.go:39] StopHost: default-k8s-different-port-20220604162205-5712
	I0604 16:23:48.191019    4556 out.go:177] * Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	I0604 16:23:48.209391    4556 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:23:49.352808    4556 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:49.352808    4556 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1434046s)
	W0604 16:23:49.352808    4556 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	W0604 16:23:49.352808    4556 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:49.352808    4556 retry.go:31] will retry after 937.714187ms: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:50.294235    4556 stop.go:39] StopHost: default-k8s-different-port-20220604162205-5712
	I0604 16:23:50.299378    4556 out.go:177] * Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	I0604 16:23:50.315327    4556 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:23:51.480799    4556 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:51.480922    4556 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1654599s)
	W0604 16:23:51.480995    4556 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	W0604 16:23:51.480995    4556 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:51.481071    4556 retry.go:31] will retry after 1.386956246s: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:52.873561    4556 stop.go:39] StopHost: default-k8s-different-port-20220604162205-5712
	I0604 16:23:52.880044    4556 out.go:177] * Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	I0604 16:23:52.898617    4556 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:23:53.973679    4556 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:53.973679    4556 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0750193s)
	W0604 16:23:53.973679    4556 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	W0604 16:23:53.973679    4556 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:53.973679    4556 retry.go:31] will retry after 2.670351914s: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:56.652728    4556 stop.go:39] StopHost: default-k8s-different-port-20220604162205-5712
	I0604 16:23:56.658199    4556 out.go:177] * Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	I0604 16:23:56.673778    4556 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:23:57.772010    4556 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:57.772010    4556 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0982207s)
	W0604 16:23:57.772010    4556 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	W0604 16:23:57.772010    4556 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:57.772010    4556 retry.go:31] will retry after 1.909024939s: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:23:59.689028    4556 stop.go:39] StopHost: default-k8s-different-port-20220604162205-5712
	I0604 16:23:59.693775    4556 out.go:177] * Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	I0604 16:23:59.710233    4556 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:00.790169    4556 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:00.790169    4556 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0797705s)
	W0604 16:24:00.790169    4556 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	W0604 16:24:00.790169    4556 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:00.790169    4556 retry.go:31] will retry after 3.323628727s: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:04.117677    4556 stop.go:39] StopHost: default-k8s-different-port-20220604162205-5712
	I0604 16:24:04.122343    4556 out.go:177] * Stopping node "default-k8s-different-port-20220604162205-5712"  ...
	I0604 16:24:04.137188    4556 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:05.250138    4556 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:05.250138    4556 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1129378s)
	W0604 16:24:05.250513    4556 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	W0604 16:24:05.250591    4556 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:05.254013    4556 out.go:177] 
	W0604 16:24:05.257172    4556 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220604162205-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220604162205-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:24:05.257172    4556 out.go:239] * 
	* 
	W0604 16:24:05.517260    4556 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:24:05.521262    4556 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220604162205-5712 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1255327s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9774274s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:24:09.661332    6568 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (27.10s)

TestStartStop/group/embed-certs/serial/Pause (11.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220604161913-5712 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p embed-certs-20220604161913-5712 --alsologtostderr -v=1: exit status 80 (3.2632844s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 16:23:43.517919    1536 out.go:296] Setting OutFile to fd 1992 ...
	I0604 16:23:43.575902    1536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:43.575902    1536 out.go:309] Setting ErrFile to fd 1792...
	I0604 16:23:43.575902    1536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:43.589162    1536 out.go:303] Setting JSON to false
	I0604 16:23:43.589162    1536 mustload.go:65] Loading cluster: embed-certs-20220604161913-5712
	I0604 16:23:43.589790    1536 config.go:178] Loaded profile config "embed-certs-20220604161913-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:23:43.610215    1536 cli_runner.go:164] Run: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}
	W0604 16:23:46.216693    1536 cli_runner.go:211] docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:23:46.216693    1536 cli_runner.go:217] Completed: docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: (2.6061939s)
	I0604 16:23:46.224712    1536 out.go:177] 
	W0604 16:23:46.227827    1536 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712
	
	W0604 16:23:46.227827    1536 out.go:239] * 
	* 
	W0604 16:23:46.491396    1536 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:23:46.495420    1536 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p embed-certs-20220604161913-5712 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.15767s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (3.0014414s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:50.687663    8608 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220604161913-5712: exit status 1 (1.1624958s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220604161913-5712 -n embed-certs-20220604161913-5712: exit status 7 (2.9334543s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:54.792993    7516 status.go:247] status error: host: state: unknown state "embed-certs-20220604161913-5712": docker container inspect embed-certs-20220604161913-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220604161913-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220604161913-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (11.54s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4.22s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220604161933-5712" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1407492s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (3.0669877s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:49.602622    3604 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4.22s)

TestStartStop/group/newest-cni/serial/FirstStart (81.78s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220604162348-5712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220604162348-5712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m17.5862123s)

-- stdout --
	* [newest-cni-20220604162348-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node newest-cni-20220604162348-5712 in cluster newest-cni-20220604162348-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220604162348-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:23:48.263402    5440 out.go:296] Setting OutFile to fd 1424 ...
	I0604 16:23:48.329668    5440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:48.329668    5440 out.go:309] Setting ErrFile to fd 1692...
	I0604 16:23:48.329743    5440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:23:48.345350    5440 out.go:303] Setting JSON to false
	I0604 16:23:48.349650    5440 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10900,"bootTime":1654348928,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:23:48.349650    5440 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:23:48.353765    5440 out.go:177] * [newest-cni-20220604162348-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:23:48.356897    5440 notify.go:193] Checking for updates...
	I0604 16:23:48.364322    5440 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:23:48.367683    5440 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:23:48.370248    5440 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:23:48.373153    5440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:23:48.380938    5440 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:23:48.381525    5440 config.go:178] Loaded profile config "embed-certs-20220604161913-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:23:48.381525    5440 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:23:48.382241    5440 config.go:178] Loaded profile config "no-preload-20220604161933-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:23:48.382241    5440 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:23:51.123660    5440 docker.go:137] docker version: linux-20.10.16
	I0604 16:23:51.130695    5440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:23:53.185595    5440 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.054733s)
	I0604 16:23:53.186187    5440 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:23:52.1868566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:23:53.190708    5440 out.go:177] * Using the docker driver based on user configuration
	I0604 16:23:53.192791    5440 start.go:284] selected driver: docker
	I0604 16:23:53.192791    5440 start.go:806] validating driver "docker" against <nil>
	I0604 16:23:53.192925    5440 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:23:53.268180    5440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:23:55.325049    5440 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0568465s)
	I0604 16:23:55.325049    5440 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:23:54.3060386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:23:55.325631    5440 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	W0604 16:23:55.325631    5440 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0604 16:23:55.326367    5440 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0604 16:23:55.331031    5440 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:23:55.332696    5440 cni.go:95] Creating CNI manager for ""
	I0604 16:23:55.332696    5440 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:23:55.332696    5440 start_flags.go:306] config:
	{Name:newest-cni-20220604162348-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220604162348-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false}
	I0604 16:23:55.336600    5440 out.go:177] * Starting control plane node newest-cni-20220604162348-5712 in cluster newest-cni-20220604162348-5712
	I0604 16:23:55.339185    5440 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:23:55.341548    5440 out.go:177] * Pulling base image ...
	I0604 16:23:55.343972    5440 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:23:55.344614    5440 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:23:55.344614    5440 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:23:55.344614    5440 cache.go:57] Caching tarball of preloaded images
	I0604 16:23:55.344614    5440 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:23:55.345184    5440 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:23:55.345240    5440 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220604162348-5712\config.json ...
	I0604 16:23:55.345240    5440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220604162348-5712\config.json: {Name:mkf0d29d79a85171ec01b6ec4115f24ffc119ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:23:56.447727    5440 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:23:56.447801    5440 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:23:56.447801    5440 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:23:56.447801    5440 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:23:56.447801    5440 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:23:56.448439    5440 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:23:56.448439    5440 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:23:56.448439    5440 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:23:56.448439    5440 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:23:58.798558    5440 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:23:58.798621    5440 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:23:58.798752    5440 start.go:352] acquiring machines lock for newest-cni-20220604162348-5712: {Name:mkbd6394023b53f3734496771860f87f29caa1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:23:58.799078    5440 start.go:356] acquired machines lock for "newest-cni-20220604162348-5712" in 237µs
	I0604 16:23:58.799283    5440 start.go:91] Provisioning new machine with config: &{Name:newest-cni-20220604162348-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220604162348-5712 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikub
e2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:23:58.799283    5440 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:23:58.802991    5440 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:23:58.803477    5440 start.go:165] libmachine.API.Create for "newest-cni-20220604162348-5712" (driver="docker")
	I0604 16:23:58.803571    5440 client.go:168] LocalClient.Create starting
	I0604 16:23:58.804065    5440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:23:58.804317    5440 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:58.804378    5440 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:58.804532    5440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:23:58.804584    5440 main.go:134] libmachine: Decoding PEM data...
	I0604 16:23:58.804584    5440 main.go:134] libmachine: Parsing certificate...
	I0604 16:23:58.815368    5440 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:23:59.892257    5440 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:23:59.892257    5440 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0768777s)
	I0604 16:23:59.899258    5440 network_create.go:272] running [docker network inspect newest-cni-20220604162348-5712] to gather additional debugging logs...
	I0604 16:23:59.899258    5440 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712
	W0604 16:24:00.994010    5440 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:00.994070    5440 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712: (1.0944619s)
	I0604 16:24:00.994070    5440 network_create.go:275] error running [docker network inspect newest-cni-20220604162348-5712]: docker network inspect newest-cni-20220604162348-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220604162348-5712
	I0604 16:24:00.994070    5440 network_create.go:277] output of [docker network inspect newest-cni-20220604162348-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220604162348-5712
	
	** /stderr **
	I0604 16:24:01.002598    5440 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:24:02.080774    5440 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0780042s)
	I0604 16:24:02.102002    5440 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000608220] misses:0}
	I0604 16:24:02.102287    5440 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:24:02.102365    5440 network_create.go:115] attempt to create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:24:02.109529    5440 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712
	W0604 16:24:03.178086    5440 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:03.178086    5440 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: (1.0685457s)
	E0604 16:24:03.178086    5440 network_create.go:104] error while trying to create docker network newest-cni-20220604162348-5712 192.168.49.0/24: create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a13fe02a19f40008febd41c1d22d9721c972e943d92aff278dc43cbd516b00d (br-8a13fe02a19f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:24:03.178086    5440 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a13fe02a19f40008febd41c1d22d9721c972e943d92aff278dc43cbd516b00d (br-8a13fe02a19f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a13fe02a19f40008febd41c1d22d9721c972e943d92aff278dc43cbd516b00d (br-8a13fe02a19f): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:24:03.194047    5440 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:24:04.320287    5440 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1262282s)
	I0604 16:24:04.328301    5440 cli_runner.go:164] Run: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:24:05.440661    5440 cli_runner.go:211] docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:24:05.440661    5440 cli_runner.go:217] Completed: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: (1.1122018s)
	I0604 16:24:05.440661    5440 client.go:171] LocalClient.Create took 6.6370176s
	I0604 16:24:07.453395    5440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:24:07.460230    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:24:08.590273    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:08.590273    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1300314s)
	I0604 16:24:08.590273    5440 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:08.880748    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:24:09.985149    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:09.985149    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1043891s)
	W0604 16:24:09.985149    5440 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:24:09.985149    5440 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:09.994152    5440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:24:10.002147    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:24:11.103294    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:11.103294    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1011356s)
	I0604 16:24:11.103294    5440 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:11.411584    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:24:12.528255    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:12.528255    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1166594s)
	W0604 16:24:12.528255    5440 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:24:12.528255    5440 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:12.528255    5440 start.go:134] duration metric: createHost completed in 13.7288224s
	I0604 16:24:12.528255    5440 start.go:81] releasing machines lock for "newest-cni-20220604162348-5712", held for 13.7290063s
	W0604 16:24:12.528255    5440 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	I0604 16:24:12.542228    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:13.682743    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:13.682743    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1405018s)
	I0604 16:24:13.682743    5440 delete.go:82] Unable to get host status for newest-cni-20220604162348-5712, assuming it has already been deleted: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	W0604 16:24:13.682743    5440 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	I0604 16:24:13.682743    5440 start.go:614] Will try again in 5 seconds ...
	I0604 16:24:18.695885    5440 start.go:352] acquiring machines lock for newest-cni-20220604162348-5712: {Name:mkbd6394023b53f3734496771860f87f29caa1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:24:18.696280    5440 start.go:356] acquired machines lock for "newest-cni-20220604162348-5712" in 242.2µs
	I0604 16:24:18.696280    5440 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:24:18.696280    5440 fix.go:55] fixHost starting: 
	I0604 16:24:18.713777    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:19.796858    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:19.796991    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0829399s)
	I0604 16:24:19.797078    5440 fix.go:103] recreateIfNeeded on newest-cni-20220604162348-5712: state= err=unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:19.797078    5440 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:24:19.815902    5440 out.go:177] * docker "newest-cni-20220604162348-5712" container is missing, will recreate.
	I0604 16:24:19.817883    5440 delete.go:124] DEMOLISHING newest-cni-20220604162348-5712 ...
	I0604 16:24:19.832885    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:20.936936    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:20.936936    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1038019s)
	W0604 16:24:20.936936    5440 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:20.936936    5440 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:20.952994    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:22.044605    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:22.044663    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0914092s)
	I0604 16:24:22.044775    5440 delete.go:82] Unable to get host status for newest-cni-20220604162348-5712, assuming it has already been deleted: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:22.052589    5440 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712
	W0604 16:24:23.155766    5440 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:23.155766    5440 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712: (1.1031651s)
	I0604 16:24:23.155922    5440 kic.go:356] could not find the container newest-cni-20220604162348-5712 to remove it. will try anyways
	I0604 16:24:23.162653    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:24.272400    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:24.272400    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1097347s)
	W0604 16:24:24.272400    5440 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:24.278407    5440 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0"
	W0604 16:24:25.450413    5440 cli_runner.go:211] docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:24:25.450413    5440 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0": (1.1709956s)
	I0604 16:24:25.450413    5440 oci.go:625] error shutdown newest-cni-20220604162348-5712: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:26.466428    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:27.570133    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:27.570133    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1036932s)
	I0604 16:24:27.570133    5440 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:27.570133    5440 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:24:27.570133    5440 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:28.051372    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:29.132867    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:29.132867    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0814047s)
	I0604 16:24:29.132867    5440 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:29.132867    5440 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:24:29.132867    5440 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:30.039977    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:31.106895    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:31.106895    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0669064s)
	I0604 16:24:31.106895    5440 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:31.106895    5440 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:24:31.106895    5440 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:31.764490    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:32.902697    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:32.902697    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1381949s)
	I0604 16:24:32.902697    5440 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:32.902697    5440 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:24:32.902697    5440 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:34.024441    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:35.123679    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:35.123679    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0992267s)
	I0604 16:24:35.123679    5440 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:35.123679    5440 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:24:35.123679    5440 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:36.651005    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:37.746730    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:37.746730    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0957121s)
	I0604 16:24:37.746730    5440 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:37.746730    5440 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:24:37.746730    5440 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:40.804711    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:24:41.921585    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:41.921585    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1168617s)
	I0604 16:24:41.921585    5440 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:41.921585    5440 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:24:41.921585    5440 oci.go:88] couldn't shut down newest-cni-20220604162348-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	 
	I0604 16:24:41.929733    5440 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220604162348-5712
	I0604 16:24:43.041230    5440 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220604162348-5712: (1.1114459s)
	I0604 16:24:43.050241    5440 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712
	W0604 16:24:44.154301    5440 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:44.154301    5440 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712: (1.104048s)
	I0604 16:24:44.162536    5440 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:24:45.249133    5440 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:24:45.249178    5440 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0864113s)
	I0604 16:24:45.260484    5440 network_create.go:272] running [docker network inspect newest-cni-20220604162348-5712] to gather additional debugging logs...
	I0604 16:24:45.261075    5440 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712
	W0604 16:24:46.368668    5440 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:46.368668    5440 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712: (1.107581s)
	I0604 16:24:46.368668    5440 network_create.go:275] error running [docker network inspect newest-cni-20220604162348-5712]: docker network inspect newest-cni-20220604162348-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220604162348-5712
	I0604 16:24:46.368668    5440 network_create.go:277] output of [docker network inspect newest-cni-20220604162348-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220604162348-5712
	
	** /stderr **
	W0604 16:24:46.370172    5440 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:24:46.370243    5440 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:24:47.378873    5440 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:24:47.386394    5440 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:24:47.386600    5440 start.go:165] libmachine.API.Create for "newest-cni-20220604162348-5712" (driver="docker")
	I0604 16:24:47.386600    5440 client.go:168] LocalClient.Create starting
	I0604 16:24:47.387133    5440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:24:47.387351    5440 main.go:134] libmachine: Decoding PEM data...
	I0604 16:24:47.387390    5440 main.go:134] libmachine: Parsing certificate...
	I0604 16:24:47.387474    5440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:24:47.387474    5440 main.go:134] libmachine: Decoding PEM data...
	I0604 16:24:47.387474    5440 main.go:134] libmachine: Parsing certificate...
	I0604 16:24:47.397418    5440 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:24:48.476193    5440 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:24:48.476193    5440 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0786737s)
	I0604 16:24:48.482076    5440 network_create.go:272] running [docker network inspect newest-cni-20220604162348-5712] to gather additional debugging logs...
	I0604 16:24:48.483098    5440 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712
	W0604 16:24:49.535527    5440 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:49.535527    5440 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712: (1.0524182s)
	I0604 16:24:49.535527    5440 network_create.go:275] error running [docker network inspect newest-cni-20220604162348-5712]: docker network inspect newest-cni-20220604162348-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220604162348-5712
	I0604 16:24:49.535527    5440 network_create.go:277] output of [docker network inspect newest-cni-20220604162348-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220604162348-5712
	
	** /stderr **
	I0604 16:24:49.544445    5440 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:24:50.564749    5440 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0202199s)
	I0604 16:24:50.581754    5440 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608220] amended:false}} dirty:map[] misses:0}
	I0604 16:24:50.581754    5440 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:24:50.597495    5440 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608220] amended:true}} dirty:map[192.168.49.0:0xc000608220 192.168.58.0:0xc0005205d8] misses:0}
	I0604 16:24:50.597495    5440 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:24:50.597495    5440 network_create.go:115] attempt to create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:24:50.605312    5440 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712
	W0604 16:24:51.659089    5440 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:51.659233    5440 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: (1.0535655s)
	E0604 16:24:51.659233    5440 network_create.go:104] error while trying to create docker network newest-cni-20220604162348-5712 192.168.58.0/24: create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a5a5ab5630251eea612e9156708d6f5f15fd9efec715669b5282b1e3c661c08 (br-8a5a5ab56302): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:24:51.659233    5440 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a5a5ab5630251eea612e9156708d6f5f15fd9efec715669b5282b1e3c661c08 (br-8a5a5ab56302): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8a5a5ab5630251eea612e9156708d6f5f15fd9efec715669b5282b1e3c661c08 (br-8a5a5ab56302): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:24:51.678018    5440 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:24:52.743501    5440 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0654706s)
	I0604 16:24:52.750994    5440 cli_runner.go:164] Run: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:24:53.813484    5440 cli_runner.go:211] docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:24:53.813484    5440 cli_runner.go:217] Completed: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: (1.062423s)
	I0604 16:24:53.813484    5440 client.go:171] LocalClient.Create took 6.426814s
	I0604 16:24:55.832094    5440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:24:55.840427    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:24:56.843152    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:56.843152    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0027141s)
	I0604 16:24:56.843152    5440 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:57.183642    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:24:58.263753    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:58.263753    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0800987s)
	W0604 16:24:58.263753    5440 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:24:58.263753    5440 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:58.267754    5440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:24:58.282364    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:24:59.356236    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:24:59.356236    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0738597s)
	I0604 16:24:59.356236    5440 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:24:59.588573    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:00.692228    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:00.692228    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1036426s)
	W0604 16:25:00.692228    5440 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:25:00.692228    5440 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:00.692228    5440 start.go:134] duration metric: createHost completed in 13.3132089s
	I0604 16:25:00.702228    5440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:25:00.710542    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:01.806570    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:01.806570    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0959474s)
	I0604 16:25:01.806570    5440 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:02.067950    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:03.125321    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:03.125321    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0573602s)
	W0604 16:25:03.125321    5440 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:25:03.125321    5440 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:03.137200    5440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:25:03.144019    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:04.228487    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:04.228660    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0844557s)
	I0604 16:25:04.228893    5440 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:04.440198    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:05.557355    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:05.557355    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1171445s)
	W0604 16:25:05.557355    5440 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:25:05.557355    5440 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:05.557355    5440 fix.go:57] fixHost completed within 46.8605633s
	I0604 16:25:05.557355    5440 start.go:81] releasing machines lock for "newest-cni-20220604162348-5712", held for 46.8605633s
	W0604 16:25:05.557355    5440 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220604162348-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220604162348-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	I0604 16:25:05.563363    5440 out.go:177] 
	W0604 16:25:05.565341    5440 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	W0604 16:25:05.565341    5440 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:25:05.565341    5440 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:25:05.569340    5440 out.go:177] 

** /stderr **
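The `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` command retried throughout the log above extracts the host port mapped to the container's SSH port; the repeated "No such container" errors mean there was nothing to index. A minimal Python stand-in for that nested template lookup (sample values assumed, not minikube's actual code):

```python
# Illustrative sketch of the Go template's two nested index operations:
# .NetworkSettings.Ports is a map keyed by "port/proto"; each value is a
# list of host bindings, and the template takes element 0's HostPort.
ports = {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55044"}]}  # assumed sample

def ssh_host_port(network_settings_ports: dict) -> str:
    """Mirror {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}."""
    return network_settings_ports["22/tcp"][0]["HostPort"]

print(ssh_host_port(ports))  # -> 55044
```

When the container does not exist, `docker inspect` exits 1 before any template evaluation, which is why minikube's retry loop keeps failing with the same error.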
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-20220604162348-5712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220604162348-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220604162348-5712: exit status 1 (1.1493624s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (2.9614954s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:25:09.783901    8948 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220604162348-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (81.78s)
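The FirstStart failure above began with the Docker daemon rejecting the 192.168.58.0/24 network ("networks have overlapping IPv4") even though minikube had skipped the reserved 192.168.49.0/24. The daemon's refusal amounts to a CIDR-intersection check against every existing bridge network; a minimal sketch with Python's stdlib `ipaddress` (illustrative values, not Docker's implementation):

```python
# Sketch of the overlap condition behind Docker's
# "networks have overlapping IPv4" error, using subnets from the log.
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# An existing bridge covering 192.168.58.0 conflicts with the requested /24;
# two disjoint /24s (what minikube's own reservation check guards against) do not.
print(subnets_overlap("192.168.58.0/24", "192.168.58.0/23"))  # -> True
print(subnets_overlap("192.168.49.0/24", "192.168.58.0/24"))  # -> False
```

Note that minikube's reservation map only tracks subnets it created itself, so a pre-existing bridge network (here br-1140b1ac4d94) can still collide at the daemon.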

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4.45s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220604161933-5712" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220604161933-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context no-preload-20220604161933-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (244.9764ms)

** stderr ** 
	error: context "no-preload-20220604161933-5712" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20220604161933-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1806144s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (3.0090108s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:23:54.037884    7632 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4.45s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220604161933-5712 "sudo crictl images -o json"

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p no-preload-20220604161933-5712 "sudo crictl images -o json": exit status 80 (3.3207364s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p no-preload-20220604161933-5712 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1148903s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (3.0218302s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:24:01.515677    1964 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.48s)
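The repeated `exit status 7 (may be ok)` is consistent with `minikube status` encoding component health as bits of the exit code (host, kubelet, and apiserver state, right to left, as described in `minikube status --help`). The constant names below are illustrative, not minikube's own:

```go
package main

import "fmt"

// Assumed bit layout: with the docker container gone, every component
// reads as not running, so all three bits are set.
const (
	hostNotRunning      = 1 << 0 // host/VM state bit
	kubeletNotRunning   = 1 << 1 // kubelet state bit
	apiserverNotRunning = 1 << 2 // apiserver state bit
)

func main() {
	code := hostNotRunning | kubeletNotRunning | apiserverNotRunning
	fmt.Println(code) // matches the observed exit status 7
}
```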

TestStartStop/group/no-preload/serial/Pause (11.78s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220604161933-5712 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p no-preload-20220604161933-5712 --alsologtostderr -v=1: exit status 80 (3.296597s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 16:24:01.774147    4996 out.go:296] Setting OutFile to fd 2012 ...
	I0604 16:24:01.834275    4996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:01.834275    4996 out.go:309] Setting ErrFile to fd 1640...
	I0604 16:24:01.834275    4996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:01.844459    4996 out.go:303] Setting JSON to false
	I0604 16:24:01.844459    4996 mustload.go:65] Loading cluster: no-preload-20220604161933-5712
	I0604 16:24:01.845618    4996 config.go:178] Loaded profile config "no-preload-20220604161933-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:01.879463    4996 cli_runner.go:164] Run: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}
	W0604 16:24:04.541170    4996 cli_runner.go:211] docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:04.541170    4996 cli_runner.go:217] Completed: docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: (2.6616043s)
	I0604 16:24:04.545648    4996 out.go:177] 
	W0604 16:24:04.547596    4996 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712
	
	W0604 16:24:04.547596    4996 out.go:239] * 
	* 
	W0604 16:24:04.809902    4996 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:24:04.812864    4996 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p no-preload-20220604161933-5712 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.218418s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (3.0516916s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:24:09.091331    9144 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220604161933-5712: exit status 1 (1.1536754s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220604161933-5712 -n no-preload-20220604161933-5712: exit status 7 (3.0390986s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:24:13.308845    7260 status.go:247] status error: host: state: unknown state "no-preload-20220604161933-5712": docker container inspect no-preload-20220604161933-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220604161933-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220604161933-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (11.78s)
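The `--format={{.State.Status}}` flag seen throughout these commands is a Go text/template evaluated against the container's inspect data. A minimal stand-in, where the `container` struct is an assumed tiny subset of Docker's inspect schema:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// container mimics a small slice of the `docker inspect` JSON shape.
type container struct {
	State struct{ Status string }
}

// render evaluates a docker-style --format template against a container.
func render(format string, c container) string {
	tmpl := template.Must(template.New("fmt").Parse(format))
	var b strings.Builder
	if err := tmpl.Execute(&b, c); err != nil {
		return "error: " + err.Error()
	}
	return b.String()
}

func main() {
	c := container{}
	c.State.Status = "running"
	fmt.Println(render("{{.State.Status}}", c))
}
```

When the container does not exist, the CLI never reaches template evaluation, which is why the log shows the raw `Error: No such container` instead of a status string.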

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (10.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (3.0087794s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:24:12.654438    4364 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220604162205-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220604162205-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0202909s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1542022s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (3.0044481s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:24:19.843882    8484 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (10.20s)

TestNetworkPlugins/group/auto/Start (77.54s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: exit status 60 (1m17.4376237s)

-- stdout --
	* [auto-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node auto-20220604161352-5712 in cluster auto-20220604161352-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20220604161352-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:24:11.321754    3668 out.go:296] Setting OutFile to fd 2044 ...
	I0604 16:24:11.380289    3668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:11.380289    3668 out.go:309] Setting ErrFile to fd 2040...
	I0604 16:24:11.380289    3668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:11.393080    3668 out.go:303] Setting JSON to false
	I0604 16:24:11.396262    3668 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10923,"bootTime":1654348928,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:24:11.396262    3668 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:24:11.408728    3668 out.go:177] * [auto-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:24:11.412203    3668 notify.go:193] Checking for updates...
	I0604 16:24:11.413830    3668 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:24:11.416784    3668 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:24:11.419442    3668 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:24:11.422268    3668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:24:11.426605    3668 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:11.426605    3668 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:11.427740    3668 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:11.428889    3668 config.go:178] Loaded profile config "no-preload-20220604161933-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:11.428889    3668 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:24:14.173832    3668 docker.go:137] docker version: linux-20.10.16
	I0604 16:24:14.182183    3668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:24:16.289536    3668 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.107255s)
	I0604 16:24:16.290243    3668 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:24:15.2584921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:24:16.294001    3668 out.go:177] * Using the docker driver based on user configuration
	I0604 16:24:16.296321    3668 start.go:284] selected driver: docker
	I0604 16:24:16.296321    3668 start.go:806] validating driver "docker" against <nil>
	I0604 16:24:16.296321    3668 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:24:16.367977    3668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:24:18.430864    3668 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0628645s)
	I0604 16:24:18.430864    3668 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:24:17.4403427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:24:18.430864    3668 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:24:18.435855    3668 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:24:18.438858    3668 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:24:18.440868    3668 cni.go:95] Creating CNI manager for ""
	I0604 16:24:18.440868    3668 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:24:18.440868    3668 start_flags.go:306] config:
	{Name:auto-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220604161352-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:24:18.442869    3668 out.go:177] * Starting control plane node auto-20220604161352-5712 in cluster auto-20220604161352-5712
	I0604 16:24:18.446853    3668 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:24:18.450860    3668 out.go:177] * Pulling base image ...
	I0604 16:24:18.453849    3668 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:24:18.453849    3668 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:24:18.453849    3668 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:24:18.453849    3668 cache.go:57] Caching tarball of preloaded images
	I0604 16:24:18.453849    3668 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:24:18.453849    3668 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:24:18.453849    3668 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220604161352-5712\config.json ...
	I0604 16:24:18.454856    3668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220604161352-5712\config.json: {Name:mk96b6f030086090564950a436a1a38b12f2632b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:24:19.546899    3668 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:24:19.546899    3668 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:19.546899    3668 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:19.546899    3668 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:24:19.546899    3668 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:24:19.546899    3668 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:24:19.546899    3668 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:24:19.546899    3668 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:24:19.546899    3668 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:21.917004    3668 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:24:21.917023    3668 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:24:21.917191    3668 start.go:352] acquiring machines lock for auto-20220604161352-5712: {Name:mk8cad24188796ff284d58d94b34ce1955e6ffb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:24:21.917352    3668 start.go:356] acquired machines lock for "auto-20220604161352-5712" in 161.3µs
	I0604 16:24:21.917619    3668 start.go:91] Provisioning new machine with config: &{Name:auto-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220604161352-5712 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:24:21.917741    3668 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:24:21.928672    3668 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:24:21.929393    3668 start.go:165] libmachine.API.Create for "auto-20220604161352-5712" (driver="docker")
	I0604 16:24:21.929500    3668 client.go:168] LocalClient.Create starting
	I0604 16:24:21.929988    3668 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:24:21.930119    3668 main.go:134] libmachine: Decoding PEM data...
	I0604 16:24:21.930119    3668 main.go:134] libmachine: Parsing certificate...
	I0604 16:24:21.930119    3668 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:24:21.930119    3668 main.go:134] libmachine: Decoding PEM data...
	I0604 16:24:21.930119    3668 main.go:134] libmachine: Parsing certificate...
	I0604 16:24:21.940116    3668 cli_runner.go:164] Run: docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:24:23.063794    3668 cli_runner.go:211] docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:24:23.063794    3668 cli_runner.go:217] Completed: docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1236655s)
	I0604 16:24:23.070757    3668 network_create.go:272] running [docker network inspect auto-20220604161352-5712] to gather additional debugging logs...
	I0604 16:24:23.070757    3668 cli_runner.go:164] Run: docker network inspect auto-20220604161352-5712
	W0604 16:24:24.165056    3668 cli_runner.go:211] docker network inspect auto-20220604161352-5712 returned with exit code 1
	I0604 16:24:24.165056    3668 cli_runner.go:217] Completed: docker network inspect auto-20220604161352-5712: (1.0942872s)
	I0604 16:24:24.165056    3668 network_create.go:275] error running [docker network inspect auto-20220604161352-5712]: docker network inspect auto-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220604161352-5712
	I0604 16:24:24.165056    3668 network_create.go:277] output of [docker network inspect auto-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220604161352-5712
	
	** /stderr **
	I0604 16:24:24.174722    3668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:24:25.278843    3668 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1041088s)
	I0604 16:24:25.299834    3668 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000620088] misses:0}
	I0604 16:24:25.299834    3668 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:24:25.299834    3668 network_create.go:115] attempt to create docker network auto-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:24:25.306867    3668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712
	W0604 16:24:26.410627    3668 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712 returned with exit code 1
	I0604 16:24:26.432636    3668 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: (1.1037479s)
	E0604 16:24:26.433493    3668 network_create.go:104] error while trying to create docker network auto-20220604161352-5712 192.168.49.0/24: create docker network auto-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60aaae08e8c236e3e6f2780ae41855d2bdcb1f4c1f9254ae083e5052af2ab16a (br-60aaae08e8c2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:24:26.433564    3668 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60aaae08e8c236e3e6f2780ae41855d2bdcb1f4c1f9254ae083e5052af2ab16a (br-60aaae08e8c2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 60aaae08e8c236e3e6f2780ae41855d2bdcb1f4c1f9254ae083e5052af2ab16a (br-60aaae08e8c2): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:24:26.450288    3668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:24:27.585207    3668 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.134907s)
	I0604 16:24:27.594942    3668 cli_runner.go:164] Run: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:24:28.691333    3668 cli_runner.go:211] docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:24:28.691333    3668 cli_runner.go:217] Completed: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0963789s)
	I0604 16:24:28.691333    3668 client.go:171] LocalClient.Create took 6.7617589s
	I0604 16:24:30.707150    3668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:24:30.714445    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:24:31.816375    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:24:31.816523    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.1019174s)
	I0604 16:24:31.816577    3668 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:32.107730    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:24:33.217910    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:24:33.217910    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.1101685s)
	W0604 16:24:33.217910    3668 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	
	W0604 16:24:33.217910    3668 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:33.227931    3668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:24:33.234929    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:24:34.342252    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:24:34.342252    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.107032s)
	I0604 16:24:34.342252    3668 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:34.653600    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:24:35.773381    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:24:35.773624    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.1197687s)
	W0604 16:24:35.773776    3668 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	
	W0604 16:24:35.773849    3668 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:35.773849    3668 start.go:134] duration metric: createHost completed in 13.8559578s
	I0604 16:24:35.773922    3668 start.go:81] releasing machines lock for "auto-20220604161352-5712", held for 13.8563802s
	W0604 16:24:35.774114    3668 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for auto-20220604161352-5712 container: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/auto-20220604161352-5712': mkdir /var/lib/docker/volumes/auto-20220604161352-5712: read-only file system
	I0604 16:24:35.788972    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:36.889276    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:36.889276    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.1002923s)
	I0604 16:24:36.889276    3668 delete.go:82] Unable to get host status for auto-20220604161352-5712, assuming it has already been deleted: state: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	W0604 16:24:36.889276    3668 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220604161352-5712 container: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/auto-20220604161352-5712': mkdir /var/lib/docker/volumes/auto-20220604161352-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220604161352-5712 container: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/auto-20220604161352-5712': mkdir /var/lib/docker/volumes/auto-20220604161352-5712: read-only file system
	
	I0604 16:24:36.889276    3668 start.go:614] Will try again in 5 seconds ...
	I0604 16:24:41.890322    3668 start.go:352] acquiring machines lock for auto-20220604161352-5712: {Name:mk8cad24188796ff284d58d94b34ce1955e6ffb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:24:41.890557    3668 start.go:356] acquired machines lock for "auto-20220604161352-5712" in 166µs
	I0604 16:24:41.890643    3668 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:24:41.890643    3668 fix.go:55] fixHost starting: 
	I0604 16:24:41.909035    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:43.010639    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:43.010862    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.1015914s)
	I0604 16:24:43.011099    3668 fix.go:103] recreateIfNeeded on auto-20220604161352-5712: state= err=unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:43.011118    3668 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:24:43.014899    3668 out.go:177] * docker "auto-20220604161352-5712" container is missing, will recreate.
	I0604 16:24:43.017604    3668 delete.go:124] DEMOLISHING auto-20220604161352-5712 ...
	I0604 16:24:43.034288    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:44.136272    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:44.136272    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.1019728s)
	W0604 16:24:44.136272    3668 stop.go:75] unable to get state: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:44.136272    3668 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:44.152275    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:45.264492    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:45.264492    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.1122051s)
	I0604 16:24:45.264492    3668 delete.go:82] Unable to get host status for auto-20220604161352-5712, assuming it has already been deleted: state: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:45.271742    3668 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220604161352-5712
	W0604 16:24:46.353285    3668 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220604161352-5712 returned with exit code 1
	I0604 16:24:46.353285    3668 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220604161352-5712: (1.0814307s)
	I0604 16:24:46.353352    3668 kic.go:356] could not find the container auto-20220604161352-5712 to remove it. will try anyways
	I0604 16:24:46.362003    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:47.393986    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:47.393986    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.0319332s)
	W0604 16:24:47.393986    3668 oci.go:84] error getting container status, will try to delete anyways: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:47.403421    3668 cli_runner.go:164] Run: docker exec --privileged -t auto-20220604161352-5712 /bin/bash -c "sudo init 0"
	W0604 16:24:48.476193    3668 cli_runner.go:211] docker exec --privileged -t auto-20220604161352-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:24:48.476193    3668 cli_runner.go:217] Completed: docker exec --privileged -t auto-20220604161352-5712 /bin/bash -c "sudo init 0": (1.0727608s)
	I0604 16:24:48.476193    3668 oci.go:625] error shutdown auto-20220604161352-5712: docker exec --privileged -t auto-20220604161352-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:49.499626    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:50.580763    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:50.580763    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.081125s)
	I0604 16:24:50.580763    3668 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:50.580763    3668 oci.go:639] temporary error: container auto-20220604161352-5712 status is  but expect it to be exited
	I0604 16:24:50.580763    3668 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:51.064169    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:52.177694    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:52.177694    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.1135127s)
	I0604 16:24:52.177694    3668 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:52.177694    3668 oci.go:639] temporary error: container auto-20220604161352-5712 status is  but expect it to be exited
	I0604 16:24:52.177694    3668 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:53.081872    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:54.171636    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:54.171636    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.0897515s)
	I0604 16:24:54.171636    3668 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:54.171636    3668 oci.go:639] temporary error: container auto-20220604161352-5712 status is  but expect it to be exited
	I0604 16:24:54.171636    3668 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:54.818939    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:55.865953    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:55.866055    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.0459581s)
	I0604 16:24:55.866161    3668 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:55.866191    3668 oci.go:639] temporary error: container auto-20220604161352-5712 status is  but expect it to be exited
	I0604 16:24:55.866191    3668 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:56.992988    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:24:58.044588    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:58.044588    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.051534s)
	I0604 16:24:58.044588    3668 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:58.044588    3668 oci.go:639] temporary error: container auto-20220604161352-5712 status is  but expect it to be exited
	I0604 16:24:58.044588    3668 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:24:59.570377    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:25:00.677228    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:00.677228    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.106839s)
	I0604 16:25:00.677228    3668 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:00.677228    3668 oci.go:639] temporary error: container auto-20220604161352-5712 status is  but expect it to be exited
	I0604 16:25:00.677228    3668 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:03.734272    3668 cli_runner.go:164] Run: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}
	W0604 16:25:04.870103    3668 cli_runner.go:211] docker container inspect auto-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:04.870318    3668 cli_runner.go:217] Completed: docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: (1.1358186s)
	I0604 16:25:04.870478    3668 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:04.870510    3668 oci.go:639] temporary error: container auto-20220604161352-5712 status is  but expect it to be exited
	I0604 16:25:04.870572    3668 oci.go:88] couldn't shut down auto-20220604161352-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-20220604161352-5712": docker container inspect auto-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	 
	I0604 16:25:04.879005    3668 cli_runner.go:164] Run: docker rm -f -v auto-20220604161352-5712
	I0604 16:25:06.042385    3668 cli_runner.go:217] Completed: docker rm -f -v auto-20220604161352-5712: (1.1633675s)
	I0604 16:25:06.050270    3668 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220604161352-5712
	W0604 16:25:07.150815    3668 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:07.150815    3668 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220604161352-5712: (1.1005329s)
	I0604 16:25:07.158825    3668 cli_runner.go:164] Run: docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:08.234309    3668 cli_runner.go:211] docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:08.234309    3668 cli_runner.go:217] Completed: docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0754715s)
	I0604 16:25:08.242375    3668 network_create.go:272] running [docker network inspect auto-20220604161352-5712] to gather additional debugging logs...
	I0604 16:25:08.242375    3668 cli_runner.go:164] Run: docker network inspect auto-20220604161352-5712
	W0604 16:25:09.311942    3668 cli_runner.go:211] docker network inspect auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:09.311942    3668 cli_runner.go:217] Completed: docker network inspect auto-20220604161352-5712: (1.0695555s)
	I0604 16:25:09.311942    3668 network_create.go:275] error running [docker network inspect auto-20220604161352-5712]: docker network inspect auto-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220604161352-5712
	I0604 16:25:09.311942    3668 network_create.go:277] output of [docker network inspect auto-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220604161352-5712
	
	** /stderr **
	W0604 16:25:09.313017    3668 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:25:09.313017    3668 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:25:10.317380    3668 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:25:10.323343    3668 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:25:10.324072    3668 start.go:165] libmachine.API.Create for "auto-20220604161352-5712" (driver="docker")
	I0604 16:25:10.324207    3668 client.go:168] LocalClient.Create starting
	I0604 16:25:10.325067    3668 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:25:10.325440    3668 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:10.325518    3668 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:10.325837    3668 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:25:10.326093    3668 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:10.326093    3668 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:10.335633    3668 cli_runner.go:164] Run: docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:11.364313    3668 cli_runner.go:211] docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:11.364313    3668 cli_runner.go:217] Completed: docker network inspect auto-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0286693s)
	I0604 16:25:11.370298    3668 network_create.go:272] running [docker network inspect auto-20220604161352-5712] to gather additional debugging logs...
	I0604 16:25:11.371296    3668 cli_runner.go:164] Run: docker network inspect auto-20220604161352-5712
	W0604 16:25:12.448046    3668 cli_runner.go:211] docker network inspect auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:12.448046    3668 cli_runner.go:217] Completed: docker network inspect auto-20220604161352-5712: (1.076738s)
	I0604 16:25:12.448046    3668 network_create.go:275] error running [docker network inspect auto-20220604161352-5712]: docker network inspect auto-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220604161352-5712
	I0604 16:25:12.448046    3668 network_create.go:277] output of [docker network inspect auto-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220604161352-5712
	
	** /stderr **
	I0604 16:25:12.455054    3668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:25:13.576300    3668 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1212342s)
	I0604 16:25:13.594539    3668 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000620088] amended:false}} dirty:map[] misses:0}
	I0604 16:25:13.594734    3668 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:25:13.611759    3668 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000620088] amended:true}} dirty:map[192.168.49.0:0xc000620088 192.168.58.0:0xc000006490] misses:0}
	I0604 16:25:13.612276    3668 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:25:13.612276    3668 network_create.go:115] attempt to create docker network auto-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:25:13.620591    3668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712
	W0604 16:25:14.710766    3668 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:14.710951    3668 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: (1.0899304s)
	E0604 16:25:14.710951    3668 network_create.go:104] error while trying to create docker network auto-20220604161352-5712 192.168.58.0/24: create docker network auto-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 598db27457ad9af024d26245a2adb1bbd5799237f6c44e0d6d3e376aa7de043c (br-598db27457ad): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:25:14.711219    3668 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 598db27457ad9af024d26245a2adb1bbd5799237f6c44e0d6d3e376aa7de043c (br-598db27457ad): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 598db27457ad9af024d26245a2adb1bbd5799237f6c44e0d6d3e376aa7de043c (br-598db27457ad): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:25:14.725118    3668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:25:15.774787    3668 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0496583s)
	I0604 16:25:15.782220    3668 cli_runner.go:164] Run: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:25:16.872382    3668 cli_runner.go:211] docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:25:16.872718    3668 cli_runner.go:217] Completed: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0900805s)
	I0604 16:25:16.872798    3668 client.go:171] LocalClient.Create took 6.5485194s
	I0604 16:25:18.890430    3668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:25:18.898562    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:19.983947    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:19.983991    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.085207s)
	I0604 16:25:19.984119    3668 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:20.323522    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:21.392213    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:21.392621    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.0686795s)
	W0604 16:25:21.392621    3668 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	
	W0604 16:25:21.392621    3668 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:21.404122    3668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:25:21.412786    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:22.440449    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:22.440449    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.0276517s)
	I0604 16:25:22.440449    3668 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:22.671331    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:23.734079    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:23.734132    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.062629s)
	W0604 16:25:23.734132    3668 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	
	W0604 16:25:23.734132    3668 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:23.734132    3668 start.go:134] duration metric: createHost completed in 13.416325s
	I0604 16:25:23.750539    3668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:25:23.757539    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:24.793070    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:24.793070    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.0352769s)
	I0604 16:25:24.793469    3668 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:25.053425    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:26.089727    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:26.089727    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.0357593s)
	W0604 16:25:26.090156    3668 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	
	W0604 16:25:26.090206    3668 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:26.100173    3668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:25:26.106338    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:27.186034    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:27.186104    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.079647s)
	I0604 16:25:27.186278    3668 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:27.396033    3668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712
	W0604 16:25:28.462847    3668 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712 returned with exit code 1
	I0604 16:25:28.462920    3668 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: (1.0665939s)
	W0604 16:25:28.462920    3668 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	
	W0604 16:25:28.462920    3668 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220604161352-5712
	I0604 16:25:28.462920    3668 fix.go:57] fixHost completed within 46.5717657s
	I0604 16:25:28.462920    3668 start.go:81] releasing machines lock for "auto-20220604161352-5712", held for 46.571801s
	W0604 16:25:28.463589    3668 out.go:239] * Failed to start docker container. Running "minikube delete -p auto-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220604161352-5712 container: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/auto-20220604161352-5712': mkdir /var/lib/docker/volumes/auto-20220604161352-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p auto-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220604161352-5712 container: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/auto-20220604161352-5712': mkdir /var/lib/docker/volumes/auto-20220604161352-5712: read-only file system
	
	I0604 16:25:28.469421    3668 out.go:177] 
	W0604 16:25:28.472084    3668 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220604161352-5712 container: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/auto-20220604161352-5712': mkdir /var/lib/docker/volumes/auto-20220604161352-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220604161352-5712 container: docker volume create auto-20220604161352-5712 --label name.minikube.sigs.k8s.io=auto-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/auto-20220604161352-5712': mkdir /var/lib/docker/volumes/auto-20220604161352-5712: read-only file system
	
	W0604 16:25:28.472084    3668 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:25:28.472084    3668 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:25:28.475442    3668 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/auto/Start (77.54s)
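The repeated `retry.go:31` entries in the log above show the pattern minikube follows while waiting for the container to report `exited`: re-run `docker container inspect`, sleep for a slightly uneven (jittered, roughly doubling) delay, and give up once a time budget is exhausted. A minimal illustrative sketch of that retry loop, with hypothetical names and parameters (this is not minikube's actual code):

```python
import random
import time


def retry_until(check, max_wait=5.0, initial=0.5):
    """Call check() until it stops raising, backing off with jittered,
    roughly doubling delays. Gives up once max_wait seconds of sleep
    have accumulated. Returns None on success, else the last error."""
    delay = initial
    waited = 0.0
    last_err = None
    while waited < max_wait:
        try:
            check()
            return None
        except Exception as err:  # e.g. "No such container" from inspect
            last_err = err
        # Jitter mirrors the uneven delays seen in the log
        # (890ms, 636ms, 1.1s, 1.5s, 3s).
        sleep_for = delay * random.uniform(0.8, 1.2)
        time.sleep(min(sleep_for, max_wait - waited))
        waited += sleep_for
        delay *= 2
    return last_err
```

In this run the retries could never succeed: `docker rm` had already removed the container, so every `inspect` returned "No such container" until the budget ran out and minikube fell through to recreating the host.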

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (118.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220604162205-5712 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220604162205-5712 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m54.6196804s)

-- stdout --
	* [default-k8s-different-port-20220604162205-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220604162205-5712 in cluster default-k8s-different-port-20220604162205-5712
	* Pulling base image ...
	* docker "default-k8s-different-port-20220604162205-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220604162205-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:24:20.119758    6164 out.go:296] Setting OutFile to fd 1684 ...
	I0604 16:24:20.181585    6164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:20.181585    6164 out.go:309] Setting ErrFile to fd 1588...
	I0604 16:24:20.181585    6164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:20.193421    6164 out.go:303] Setting JSON to false
	I0604 16:24:20.196742    6164 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10932,"bootTime":1654348928,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:24:20.196883    6164 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:24:20.199802    6164 out.go:177] * [default-k8s-different-port-20220604162205-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:24:20.203567    6164 notify.go:193] Checking for updates...
	I0604 16:24:20.205210    6164 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:24:20.207919    6164 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:24:20.210565    6164 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:24:20.212730    6164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:24:20.215223    6164 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:20.216424    6164 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:24:22.956110    6164 docker.go:137] docker version: linux-20.10.16
	I0604 16:24:22.964056    6164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:24:25.124383    6164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.160304s)
	I0604 16:24:25.125487    6164 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:24:24.0460381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:24:25.168166    6164 out.go:177] * Using the docker driver based on existing profile
	I0604 16:24:25.173339    6164 start.go:284] selected driver: docker
	I0604 16:24:25.173339    6164 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220604162205-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-
20220604162205-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] List
enAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:24:25.173702    6164 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:24:25.254834    6164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:24:27.382699    6164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1277887s)
	I0604 16:24:27.382930    6164 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:24:26.3389207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:24:27.382930    6164 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:24:27.382930    6164 cni.go:95] Creating CNI manager for ""
	I0604 16:24:27.382930    6164 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:24:27.382930    6164 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220604162205-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220604162205-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:24:27.437972    6164 out.go:177] * Starting control plane node default-k8s-different-port-20220604162205-5712 in cluster default-k8s-different-port-20220604162205-5712
	I0604 16:24:27.441978    6164 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:24:27.445975    6164 out.go:177] * Pulling base image ...
	I0604 16:24:27.447973    6164 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:24:27.447973    6164 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:24:27.447973    6164 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:24:27.447973    6164 cache.go:57] Caching tarball of preloaded images
	I0604 16:24:27.447973    6164 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:24:27.447973    6164 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:24:27.448983    6164 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220604162205-5712\config.json ...
	I0604 16:24:28.520543    6164 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:24:28.520543    6164 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:28.520894    6164 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:28.520965    6164 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:24:28.520965    6164 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:24:28.520965    6164 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:24:28.520965    6164 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:24:28.520965    6164 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:24:28.520965    6164 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:30.881397    6164 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:24:30.881514    6164 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:24:30.881622    6164 start.go:352] acquiring machines lock for default-k8s-different-port-20220604162205-5712: {Name:mka7c4079f67ca8a42486acaf1dd6d7206313e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:24:30.881667    6164 start.go:356] acquired machines lock for "default-k8s-different-port-20220604162205-5712" in 0s
	I0604 16:24:30.881667    6164 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:24:30.881667    6164 fix.go:55] fixHost starting: 
	I0604 16:24:30.897760    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:32.007318    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:32.007370    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1092287s)
	I0604 16:24:32.007432    6164 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220604162205-5712: state= err=unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:32.007432    6164 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:24:32.011016    6164 out.go:177] * docker "default-k8s-different-port-20220604162205-5712" container is missing, will recreate.
	I0604 16:24:32.012810    6164 delete.go:124] DEMOLISHING default-k8s-different-port-20220604162205-5712 ...
	I0604 16:24:32.034512    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:33.124273    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:33.124273    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0897489s)
	W0604 16:24:33.124273    6164 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:33.124273    6164 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:33.139273    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:34.232207    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:34.232207    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0929228s)
	I0604 16:24:34.232207    6164 delete.go:82] Unable to get host status for default-k8s-different-port-20220604162205-5712, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:34.239195    6164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712
	W0604 16:24:35.388400    6164 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:24:35.388400    6164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712: (1.1490574s)
	I0604 16:24:35.388511    6164 kic.go:356] could not find the container default-k8s-different-port-20220604162205-5712 to remove it. will try anyways
	I0604 16:24:35.399495    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:36.481556    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:36.481556    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0820494s)
	W0604 16:24:36.481556    6164 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:36.489275    6164 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0"
	W0604 16:24:37.591776    6164 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:24:37.591776    6164 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0": (1.1024891s)
	I0604 16:24:37.591776    6164 oci.go:625] error shutdown default-k8s-different-port-20220604162205-5712: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:38.615257    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:39.690172    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:39.690172    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0749034s)
	I0604 16:24:39.690172    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:39.690172    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:24:39.690172    6164 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:40.261890    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:41.345538    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:41.345538    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.083449s)
	I0604 16:24:41.345779    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:41.345779    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:24:41.345779    6164 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:42.437722    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:43.570645    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:43.570783    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1328424s)
	I0604 16:24:43.570849    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:43.570913    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:24:43.570979    6164 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:44.900509    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:45.980613    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:45.980613    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0800924s)
	I0604 16:24:45.980613    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:45.980613    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:24:45.980613    6164 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:47.588762    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:48.662894    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:48.662894    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0741197s)
	I0604 16:24:48.662894    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:48.662894    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:24:48.662894    6164 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:51.019630    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:52.099275    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:52.099275    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0796329s)
	I0604 16:24:52.099275    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:52.099275    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:24:52.099275    6164 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:56.627475    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:24:57.668779    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:57.668779    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0412923s)
	I0604 16:24:57.668779    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:24:57.668779    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:24:57.668779    6164 oci.go:88] couldn't shut down default-k8s-different-port-20220604162205-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	 
	I0604 16:24:57.678516    6164 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220604162205-5712
	I0604 16:24:58.786448    6164 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220604162205-5712: (1.1079201s)
	I0604 16:24:58.792460    6164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712
	W0604 16:24:59.841431    6164 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:24:59.841680    6164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712: (1.0489588s)
	I0604 16:24:59.850058    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:00.954507    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:00.954507    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1044375s)
	I0604 16:25:00.963389    6164 network_create.go:272] running [docker network inspect default-k8s-different-port-20220604162205-5712] to gather additional debugging logs...
	I0604 16:25:00.963389    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712
	W0604 16:25:02.057070    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:02.057070    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712: (1.0936689s)
	I0604 16:25:02.057070    6164 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220604162205-5712]: docker network inspect default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220604162205-5712
	I0604 16:25:02.057070    6164 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220604162205-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220604162205-5712
	
	** /stderr **
	W0604 16:25:02.058217    6164 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:25:02.058217    6164 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:25:03.064489    6164 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:25:03.068483    6164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:25:03.069302    6164 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220604162205-5712" (driver="docker")
	I0604 16:25:03.069391    6164 client.go:168] LocalClient.Create starting
	I0604 16:25:03.069961    6164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:25:03.070259    6164 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:03.070301    6164 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:03.070497    6164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:25:03.070789    6164 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:03.070823    6164 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:03.081280    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:04.181949    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:04.181949    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1006564s)
	I0604 16:25:04.188966    6164 network_create.go:272] running [docker network inspect default-k8s-different-port-20220604162205-5712] to gather additional debugging logs...
	I0604 16:25:04.188966    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712
	W0604 16:25:05.292194    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:05.292374    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712: (1.1032157s)
	I0604 16:25:05.292374    6164 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220604162205-5712]: docker network inspect default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220604162205-5712
	I0604 16:25:05.292482    6164 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220604162205-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220604162205-5712
	
	** /stderr **
	I0604 16:25:05.301851    6164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:25:06.389411    6164 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0874576s)
	I0604 16:25:06.408244    6164 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007122d0] misses:0}
	I0604 16:25:06.408244    6164 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:25:06.408244    6164 network_create.go:115] attempt to create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:25:06.415294    6164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712
	W0604 16:25:07.608429    6164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:07.608429    6164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: (1.1929167s)
	E0604 16:25:07.608654    6164 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24: create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network de02aa3724266db884988cac2eef4b6bfdabe504f2e1ee46b7e0b8aa051e80fe (br-de02aa372426): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:25:07.609069    6164 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network de02aa3724266db884988cac2eef4b6bfdabe504f2e1ee46b7e0b8aa051e80fe (br-de02aa372426): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network de02aa3724266db884988cac2eef4b6bfdabe504f2e1ee46b7e0b8aa051e80fe (br-de02aa372426): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:25:07.626768    6164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:25:08.712303    6164 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0854336s)
	I0604 16:25:08.720109    6164 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:25:09.767482    6164 cli_runner.go:211] docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:25:09.767482    6164 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0473619s)
	I0604 16:25:09.767482    6164 client.go:171] LocalClient.Create took 6.6979605s
	I0604 16:25:11.783405    6164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:25:11.789514    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:12.880838    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:12.880878    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0912102s)
	I0604 16:25:12.880958    6164 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:13.063534    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:14.161277    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:14.161277    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0977308s)
	W0604 16:25:14.161277    6164 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:25:14.161277    6164 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:14.176822    6164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:25:14.184696    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:15.265546    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:15.265772    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0806753s)
	I0604 16:25:15.265852    6164 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:15.485857    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:16.572579    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:16.572640    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0865265s)
	W0604 16:25:16.572772    6164 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:25:16.572772    6164 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:16.572772    6164 start.go:134] duration metric: createHost completed in 13.5080997s
	I0604 16:25:16.583600    6164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:25:16.590149    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:17.634010    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:17.634010    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0438491s)
	I0604 16:25:17.634010    6164 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:17.976107    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:19.036658    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:19.036732    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0602813s)
	W0604 16:25:19.036732    6164 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:25:19.036732    6164 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:19.049452    6164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:25:19.056183    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:20.076701    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:20.076701    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0205072s)
	I0604 16:25:20.076701    6164 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:20.311898    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:25:21.376924    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:21.376924    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0649695s)
	W0604 16:25:21.376924    6164 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:25:21.376924    6164 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:21.376924    6164 fix.go:57] fixHost completed within 50.4947034s
	I0604 16:25:21.376924    6164 start.go:81] releasing machines lock for "default-k8s-different-port-20220604162205-5712", held for 50.4947034s
	W0604 16:25:21.376924    6164 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	W0604 16:25:21.377902    6164 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	I0604 16:25:21.377986    6164 start.go:614] Will try again in 5 seconds ...
	I0604 16:25:26.383993    6164 start.go:352] acquiring machines lock for default-k8s-different-port-20220604162205-5712: {Name:mka7c4079f67ca8a42486acaf1dd6d7206313e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:25:26.383993    6164 start.go:356] acquired machines lock for "default-k8s-different-port-20220604162205-5712" in 0s
	I0604 16:25:26.383993    6164 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:25:26.383993    6164 fix.go:55] fixHost starting: 
	I0604 16:25:26.398796    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:27.511706    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:27.511765    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1127471s)
	I0604 16:25:27.511814    6164 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220604162205-5712: state= err=unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:27.511814    6164 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:25:27.515632    6164 out.go:177] * docker "default-k8s-different-port-20220604162205-5712" container is missing, will recreate.
	I0604 16:25:27.517601    6164 delete.go:124] DEMOLISHING default-k8s-different-port-20220604162205-5712 ...
	I0604 16:25:27.531687    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:28.616972    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:28.617066    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0852221s)
	W0604 16:25:28.617066    6164 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:28.617205    6164 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:28.641351    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:29.724093    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:29.724093    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0827293s)
	I0604 16:25:29.724093    6164 delete.go:82] Unable to get host status for default-k8s-different-port-20220604162205-5712, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:29.735595    6164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712
	W0604 16:25:30.833836    6164 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:30.834038    6164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712: (1.0982287s)
	I0604 16:25:30.834095    6164 kic.go:356] could not find the container default-k8s-different-port-20220604162205-5712 to remove it. will try anyways
	I0604 16:25:30.842425    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:31.920337    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:31.920419    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0776553s)
	W0604 16:25:31.920419    6164 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:31.927875    6164 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0"
	W0604 16:25:33.022216    6164 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:25:33.022216    6164 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0": (1.0943282s)
	I0604 16:25:33.022216    6164 oci.go:625] error shutdown default-k8s-different-port-20220604162205-5712: docker exec --privileged -t default-k8s-different-port-20220604162205-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:34.042678    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:35.104655    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:35.104655    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0619648s)
	I0604 16:25:35.104994    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:35.104994    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:25:35.104994    6164 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:35.599925    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:36.700765    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:36.700765    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1006594s)
	I0604 16:25:36.700765    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:36.700765    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:25:36.700765    6164 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:37.307177    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:38.400108    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:38.400108    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0929192s)
	I0604 16:25:38.400108    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:38.400108    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:25:38.400108    6164 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:39.303698    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:40.425889    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:40.425889    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1221784s)
	I0604 16:25:40.425889    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:40.425889    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:25:40.425889    6164 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:42.430776    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:43.517019    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:43.517077    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.0861487s)
	I0604 16:25:43.517144    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:43.517341    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:25:43.517393    6164 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:45.350404    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:46.503767    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:46.503767    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1533022s)
	I0604 16:25:46.503767    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:46.503767    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:25:46.503767    6164 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:49.195264    6164 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:25:50.331789    6164 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:50.332015    6164 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (1.1365128s)
	I0604 16:25:50.332084    6164 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:25:50.332128    6164 oci.go:639] temporary error: container default-k8s-different-port-20220604162205-5712 status is  but expect it to be exited
	I0604 16:25:50.332128    6164 oci.go:88] couldn't shut down default-k8s-different-port-20220604162205-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	 
	I0604 16:25:50.340331    6164 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220604162205-5712
	I0604 16:25:51.417421    6164 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220604162205-5712: (1.0770778s)
	I0604 16:25:51.435544    6164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712
	W0604 16:25:52.518494    6164 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:52.518551    6164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220604162205-5712: (1.0828848s)
	I0604 16:25:52.526874    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:53.672736    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:53.672803    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1458492s)
	I0604 16:25:53.679988    6164 network_create.go:272] running [docker network inspect default-k8s-different-port-20220604162205-5712] to gather additional debugging logs...
	I0604 16:25:53.679988    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712
	W0604 16:25:54.810855    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:54.810855    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712: (1.1308546s)
	I0604 16:25:54.810855    6164 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220604162205-5712]: docker network inspect default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220604162205-5712
	I0604 16:25:54.810855    6164 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220604162205-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220604162205-5712
	
	** /stderr **
	W0604 16:25:54.812187    6164 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:25:54.812187    6164 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:25:55.819338    6164 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:25:55.823838    6164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:25:55.823838    6164 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220604162205-5712" (driver="docker")
	I0604 16:25:55.823838    6164 client.go:168] LocalClient.Create starting
	I0604 16:25:55.824474    6164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:25:55.824474    6164 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:55.824474    6164 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:55.825151    6164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:25:55.825151    6164 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:55.825151    6164 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:55.833043    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:56.935676    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:56.935711    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1025484s)
	I0604 16:25:56.944276    6164 network_create.go:272] running [docker network inspect default-k8s-different-port-20220604162205-5712] to gather additional debugging logs...
	I0604 16:25:56.944276    6164 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220604162205-5712
	W0604 16:25:58.109832    6164 cli_runner.go:211] docker network inspect default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:25:58.109925    6164 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220604162205-5712: (1.1654471s)
	I0604 16:25:58.110048    6164 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220604162205-5712]: docker network inspect default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220604162205-5712
	I0604 16:25:58.110155    6164 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220604162205-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220604162205-5712
	
	** /stderr **
	I0604 16:25:58.120795    6164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:25:59.208116    6164 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.086759s)
	I0604 16:25:59.224934    6164 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007122d0] amended:false}} dirty:map[] misses:0}
	I0604 16:25:59.224934    6164 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:25:59.242111    6164 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007122d0] amended:true}} dirty:map[192.168.49.0:0xc0007122d0 192.168.58.0:0xc0005b6120] misses:0}
	I0604 16:25:59.242111    6164 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:25:59.242111    6164 network_create.go:115] attempt to create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:25:59.249051    6164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712
	W0604 16:26:00.330498    6164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:00.330498    6164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: (1.0813745s)
	E0604 16:26:00.330498    6164 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24: create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9009a5056ec2be28af45e0725ff1dd5951c559a5a756e9d6a6f49ef93e7548d2 (br-9009a5056ec2): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:26:00.332269    6164 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9009a5056ec2be28af45e0725ff1dd5951c559a5a756e9d6a6f49ef93e7548d2 (br-9009a5056ec2): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220604162205-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9009a5056ec2be28af45e0725ff1dd5951c559a5a756e9d6a6f49ef93e7548d2 (br-9009a5056ec2): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:26:00.347211    6164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:26:01.434471    6164 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.087248s)
	I0604 16:26:01.442145    6164 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:26:02.530396    6164 cli_runner.go:211] docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:26:02.530396    6164 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0882386s)
	I0604 16:26:02.530396    6164 client.go:171] LocalClient.Create took 6.7064836s
	I0604 16:26:04.551775    6164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:04.560292    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:05.680241    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:05.680241    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1199365s)
	I0604 16:26:05.680241    6164 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:05.959210    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:07.086020    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:07.086020    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.126798s)
	W0604 16:26:07.086020    6164 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:26:07.086020    6164 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:07.096030    6164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:07.103017    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:08.146733    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:08.146733    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0437048s)
	I0604 16:26:08.146967    6164 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:08.356498    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:09.416271    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:09.416271    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0597617s)
	W0604 16:26:09.416271    6164 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:26:09.416271    6164 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:09.416271    6164 start.go:134] duration metric: createHost completed in 13.596783s
	I0604 16:26:09.426257    6164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:09.436333    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:10.500392    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:10.500392    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0637642s)
	I0604 16:26:10.500522    6164 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:10.829030    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:11.875890    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:11.875890    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0468011s)
	W0604 16:26:11.875890    6164 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:26:11.875890    6164 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:11.885896    6164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:11.891921    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:12.995572    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:12.995572    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.1036383s)
	I0604 16:26:12.995572    6164 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:13.351577    6164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712
	W0604 16:26:14.448357    6164 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712 returned with exit code 1
	I0604 16:26:14.448630    6164 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: (1.0967677s)
	W0604 16:26:14.448842    6164 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:26:14.448842    6164 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220604162205-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220604162205-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	I0604 16:26:14.448842    6164 fix.go:57] fixHost completed within 48.0643173s
	I0604 16:26:14.448842    6164 start.go:81] releasing machines lock for "default-k8s-different-port-20220604162205-5712", held for 48.0643173s
	W0604 16:26:14.448842    6164 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220604162205-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220604162205-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	I0604 16:26:14.456056    6164 out.go:177] 
	W0604 16:26:14.458182    6164 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220604162205-5712 container: docker volume create default-k8s-different-port-20220604162205-5712 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220604162205-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220604162205-5712: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220604162205-5712: read-only file system
	
	W0604 16:26:14.458669    6164 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:26:14.458718    6164 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:26:14.461878    6164 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220604162205-5712 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1767361s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9732206s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:26:18.829663    7112 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (118.99s)

TestNetworkPlugins/group/kindnet/Start (77.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220604161400-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20220604161400-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 60 (1m17.2530553s)

-- stdout --
	* [kindnet-20220604161400-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kindnet-20220604161400-5712 in cluster kindnet-20220604161400-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kindnet-20220604161400-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:24:29.908668    7940 out.go:296] Setting OutFile to fd 2040 ...
	I0604 16:24:29.968337    7940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:29.968337    7940 out.go:309] Setting ErrFile to fd 1816...
	I0604 16:24:29.968337    7940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:24:29.979058    7940 out.go:303] Setting JSON to false
	I0604 16:24:29.981418    7940 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10942,"bootTime":1654348927,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:24:29.981418    7940 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:24:29.999635    7940 out.go:177] * [kindnet-20220604161400-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:24:30.003548    7940 notify.go:193] Checking for updates...
	I0604 16:24:30.007111    7940 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:24:30.010108    7940 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:24:30.012247    7940 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:24:30.014910    7940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:24:30.017846    7940 config.go:178] Loaded profile config "auto-20220604161352-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:30.017846    7940 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:30.018925    7940 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:30.019148    7940 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:24:30.019148    7940 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:24:32.779723    7940 docker.go:137] docker version: linux-20.10.16
	I0604 16:24:32.788333    7940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:24:34.936682    7940 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1480805s)
	I0604 16:24:34.937419    7940 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:24:33.8671228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:24:34.951455    7940 out.go:177] * Using the docker driver based on user configuration
	I0604 16:24:34.956438    7940 start.go:284] selected driver: docker
	I0604 16:24:34.956438    7940 start.go:806] validating driver "docker" against <nil>
	I0604 16:24:34.956438    7940 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:24:35.036327    7940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:24:37.105660    7940 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0690267s)
	I0604 16:24:37.105982    7940 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:24:36.0940722 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:24:37.106198    7940 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:24:37.106775    7940 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:24:37.109364    7940 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:24:37.111616    7940 cni.go:95] Creating CNI manager for "kindnet"
	I0604 16:24:37.111616    7940 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0604 16:24:37.111616    7940 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0604 16:24:37.111616    7940 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0604 16:24:37.111616    7940 start_flags.go:306] config:
	{Name:kindnet-20220604161400-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220604161400-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:24:37.113779    7940 out.go:177] * Starting control plane node kindnet-20220604161400-5712 in cluster kindnet-20220604161400-5712
	I0604 16:24:37.117411    7940 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:24:37.120319    7940 out.go:177] * Pulling base image ...
	I0604 16:24:37.123726    7940 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:24:37.123886    7940 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:24:37.123968    7940 cache.go:57] Caching tarball of preloaded images
	I0604 16:24:37.124105    7940 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:24:37.124105    7940 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:24:37.124105    7940 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:24:37.124683    7940 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220604161400-5712\config.json ...
	I0604 16:24:37.125006    7940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220604161400-5712\config.json: {Name:mk6ab86aacb45a4f74393e56f88e7047bf57de56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:24:38.190052    7940 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:24:38.190247    7940 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:38.190247    7940 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:38.190247    7940 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:24:38.190247    7940 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:24:38.190247    7940 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:24:38.190247    7940 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:24:38.190831    7940 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:24:38.190831    7940 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:24:40.491461    7940 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:24:40.491497    7940 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:24:40.491649    7940 start.go:352] acquiring machines lock for kindnet-20220604161400-5712: {Name:mk6a08a1e499215f3d61e6fa72d625bb41f63c15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:24:40.491930    7940 start.go:356] acquired machines lock for "kindnet-20220604161400-5712" in 254.8µs
	I0604 16:24:40.491961    7940 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220604161400-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220604161400-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:24:40.491961    7940 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:24:40.495267    7940 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:24:40.495736    7940 start.go:165] libmachine.API.Create for "kindnet-20220604161400-5712" (driver="docker")
	I0604 16:24:40.495736    7940 client.go:168] LocalClient.Create starting
	I0604 16:24:40.495736    7940 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:24:40.496525    7940 main.go:134] libmachine: Decoding PEM data...
	I0604 16:24:40.496550    7940 main.go:134] libmachine: Parsing certificate...
	I0604 16:24:40.496550    7940 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:24:40.496550    7940 main.go:134] libmachine: Decoding PEM data...
	I0604 16:24:40.496550    7940 main.go:134] libmachine: Parsing certificate...
	I0604 16:24:40.506598    7940 cli_runner.go:164] Run: docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:24:41.581035    7940 cli_runner.go:211] docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:24:41.581035    7940 cli_runner.go:217] Completed: docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0744252s)
	I0604 16:24:41.589321    7940 network_create.go:272] running [docker network inspect kindnet-20220604161400-5712] to gather additional debugging logs...
	I0604 16:24:41.589321    7940 cli_runner.go:164] Run: docker network inspect kindnet-20220604161400-5712
	W0604 16:24:42.664672    7940 cli_runner.go:211] docker network inspect kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:24:42.664672    7940 cli_runner.go:217] Completed: docker network inspect kindnet-20220604161400-5712: (1.0753391s)
	I0604 16:24:42.664672    7940 network_create.go:275] error running [docker network inspect kindnet-20220604161400-5712]: docker network inspect kindnet-20220604161400-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220604161400-5712
	I0604 16:24:42.664672    7940 network_create.go:277] output of [docker network inspect kindnet-20220604161400-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220604161400-5712
	
	** /stderr **
	I0604 16:24:42.672946    7940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:24:43.776418    7940 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1032418s)
	I0604 16:24:43.800044    7940 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a4aff8] misses:0}
	I0604 16:24:43.800372    7940 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:24:43.800372    7940 network_create.go:115] attempt to create docker network kindnet-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:24:43.807706    7940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712
	W0604 16:24:44.876193    7940 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:24:44.876416    7940 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: (1.0684749s)
	E0604 16:24:44.876525    7940 network_create.go:104] error while trying to create docker network kindnet-20220604161400-5712 192.168.49.0/24: create docker network kindnet-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e3a5bca952750bf1dff6c30a897a986296ef24ee4b253dff201bc2395f92f836 (br-e3a5bca95275): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:24:44.876550    7940 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e3a5bca952750bf1dff6c30a897a986296ef24ee4b253dff201bc2395f92f836 (br-e3a5bca95275): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e3a5bca952750bf1dff6c30a897a986296ef24ee4b253dff201bc2395f92f836 (br-e3a5bca95275): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:24:44.892856    7940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:24:45.995906    7940 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1029997s)
	I0604 16:24:46.003655    7940 cli_runner.go:164] Run: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:24:47.064198    7940 cli_runner.go:211] docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:24:47.064226    7940 cli_runner.go:217] Completed: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0604162s)
	I0604 16:24:47.064316    7940 client.go:171] LocalClient.Create took 6.5685084s
	I0604 16:24:49.087203    7940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:24:49.092536    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:24:50.151935    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:24:50.151935    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0593878s)
	I0604 16:24:50.151935    7940 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:24:50.446844    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:24:51.530942    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:24:51.530942    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0840859s)
	W0604 16:24:51.530942    7940 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	
	W0604 16:24:51.530942    7940 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:24:51.539941    7940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:24:51.546956    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:24:52.619802    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:24:52.620037    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0728342s)
	I0604 16:24:52.620116    7940 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:24:52.926566    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:24:53.985341    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:24:53.985341    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0587634s)
	W0604 16:24:53.985341    7940 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	
	W0604 16:24:53.985341    7940 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:24:53.985341    7940 start.go:134] duration metric: createHost completed in 13.4932326s
	I0604 16:24:53.985341    7940 start.go:81] releasing machines lock for "kindnet-20220604161400-5712", held for 13.4932326s
	W0604 16:24:53.985341    7940 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for kindnet-20220604161400-5712 container: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220604161400-5712': mkdir /var/lib/docker/volumes/kindnet-20220604161400-5712: read-only file system
	I0604 16:24:54.000469    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:24:55.027961    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:24:55.028163    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.0272748s)
	I0604 16:24:55.028252    7940 delete.go:82] Unable to get host status for kindnet-20220604161400-5712, assuming it has already been deleted: state: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	W0604 16:24:55.028252    7940 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220604161400-5712 container: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220604161400-5712': mkdir /var/lib/docker/volumes/kindnet-20220604161400-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220604161400-5712 container: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220604161400-5712': mkdir /var/lib/docker/volumes/kindnet-20220604161400-5712: read-only file system
	
	I0604 16:24:55.028252    7940 start.go:614] Will try again in 5 seconds ...
	I0604 16:25:00.034187    7940 start.go:352] acquiring machines lock for kindnet-20220604161400-5712: {Name:mk6a08a1e499215f3d61e6fa72d625bb41f63c15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:25:00.034187    7940 start.go:356] acquired machines lock for "kindnet-20220604161400-5712" in 0s
	I0604 16:25:00.034187    7940 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:25:00.034187    7940 fix.go:55] fixHost starting: 
	I0604 16:25:00.049078    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:01.173833    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:01.173833    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.1244457s)
	I0604 16:25:01.173833    7940 fix.go:103] recreateIfNeeded on kindnet-20220604161400-5712: state= err=unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:01.173833    7940 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:25:01.178587    7940 out.go:177] * docker "kindnet-20220604161400-5712" container is missing, will recreate.
	I0604 16:25:01.181002    7940 delete.go:124] DEMOLISHING kindnet-20220604161400-5712 ...
	I0604 16:25:01.193616    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:02.304411    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:02.304411    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.1106199s)
	W0604 16:25:02.304617    7940 stop.go:75] unable to get state: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:02.304617    7940 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:02.320065    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:03.375821    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:03.375821    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.055744s)
	I0604 16:25:03.375821    7940 delete.go:82] Unable to get host status for kindnet-20220604161400-5712, assuming it has already been deleted: state: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:03.382834    7940 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220604161400-5712
	W0604 16:25:04.446427    7940 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:04.446427    7940 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kindnet-20220604161400-5712: (1.0635816s)
	I0604 16:25:04.446427    7940 kic.go:356] could not find the container kindnet-20220604161400-5712 to remove it. will try anyways
	I0604 16:25:04.454860    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:05.620414    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:05.620414    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.1655417s)
	W0604 16:25:05.620414    7940 oci.go:84] error getting container status, will try to delete anyways: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:05.627404    7940 cli_runner.go:164] Run: docker exec --privileged -t kindnet-20220604161400-5712 /bin/bash -c "sudo init 0"
	W0604 16:25:06.748465    7940 cli_runner.go:211] docker exec --privileged -t kindnet-20220604161400-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:25:06.748465    7940 cli_runner.go:217] Completed: docker exec --privileged -t kindnet-20220604161400-5712 /bin/bash -c "sudo init 0": (1.1210483s)
	I0604 16:25:06.748465    7940 oci.go:625] error shutdown kindnet-20220604161400-5712: docker exec --privileged -t kindnet-20220604161400-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:07.756661    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:08.884050    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:08.884050    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.1273768s)
	I0604 16:25:08.884050    7940 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:08.884050    7940 oci.go:639] temporary error: container kindnet-20220604161400-5712 status is  but expect it to be exited
	I0604 16:25:08.884050    7940 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:09.368432    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:10.426447    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:10.426447    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.0580042s)
	I0604 16:25:10.426447    7940 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:10.426447    7940 oci.go:639] temporary error: container kindnet-20220604161400-5712 status is  but expect it to be exited
	I0604 16:25:10.426447    7940 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:11.327920    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:12.416190    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:12.416190    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.088204s)
	I0604 16:25:12.416241    7940 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:12.416241    7940 oci.go:639] temporary error: container kindnet-20220604161400-5712 status is  but expect it to be exited
	I0604 16:25:12.416241    7940 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:13.066662    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:14.192524    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:14.192524    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.1258493s)
	I0604 16:25:14.192524    7940 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:14.192524    7940 oci.go:639] temporary error: container kindnet-20220604161400-5712 status is  but expect it to be exited
	I0604 16:25:14.192524    7940 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:15.313487    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:16.399933    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:16.399933    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.0864343s)
	I0604 16:25:16.399933    7940 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:16.399933    7940 oci.go:639] temporary error: container kindnet-20220604161400-5712 status is  but expect it to be exited
	I0604 16:25:16.399933    7940 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:17.928862    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:19.002058    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:19.002058    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.0731843s)
	I0604 16:25:19.002058    7940 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:19.002058    7940 oci.go:639] temporary error: container kindnet-20220604161400-5712 status is  but expect it to be exited
	I0604 16:25:19.002058    7940 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:22.051702    7940 cli_runner.go:164] Run: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}
	W0604 16:25:23.088942    7940 cli_runner.go:211] docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:23.088942    7940 cli_runner.go:217] Completed: docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: (1.0362321s)
	I0604 16:25:23.088942    7940 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:23.088942    7940 oci.go:639] temporary error: container kindnet-20220604161400-5712 status is  but expect it to be exited
	I0604 16:25:23.088942    7940 oci.go:88] couldn't shut down kindnet-20220604161400-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-20220604161400-5712": docker container inspect kindnet-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	 
	I0604 16:25:23.096959    7940 cli_runner.go:164] Run: docker rm -f -v kindnet-20220604161400-5712
	I0604 16:25:24.160816    7940 cli_runner.go:217] Completed: docker rm -f -v kindnet-20220604161400-5712: (1.0636683s)
	I0604 16:25:24.167594    7940 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220604161400-5712
	W0604 16:25:25.216920    7940 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:25.217150    7940 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kindnet-20220604161400-5712: (1.0489316s)
	I0604 16:25:25.224095    7940 cli_runner.go:164] Run: docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:26.275585    7940 cli_runner.go:211] docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:26.275632    7940 cli_runner.go:217] Completed: docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0511822s)
	I0604 16:25:26.282559    7940 network_create.go:272] running [docker network inspect kindnet-20220604161400-5712] to gather additional debugging logs...
	I0604 16:25:26.282682    7940 cli_runner.go:164] Run: docker network inspect kindnet-20220604161400-5712
	W0604 16:25:27.373297    7940 cli_runner.go:211] docker network inspect kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:27.373297    7940 cli_runner.go:217] Completed: docker network inspect kindnet-20220604161400-5712: (1.0904839s)
	I0604 16:25:27.373384    7940 network_create.go:275] error running [docker network inspect kindnet-20220604161400-5712]: docker network inspect kindnet-20220604161400-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220604161400-5712
	I0604 16:25:27.373384    7940 network_create.go:277] output of [docker network inspect kindnet-20220604161400-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220604161400-5712
	
	** /stderr **
	W0604 16:25:27.374466    7940 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:25:27.374466    7940 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:25:28.385736    7940 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:25:28.389153    7940 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:25:28.389394    7940 start.go:165] libmachine.API.Create for "kindnet-20220604161400-5712" (driver="docker")
	I0604 16:25:28.389426    7940 client.go:168] LocalClient.Create starting
	I0604 16:25:28.389776    7940 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:25:28.390308    7940 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:28.390308    7940 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:28.390604    7940 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:25:28.390787    7940 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:28.390787    7940 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:28.399238    7940 cli_runner.go:164] Run: docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:29.487638    7940 cli_runner.go:211] docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:29.487638    7940 cli_runner.go:217] Completed: docker network inspect kindnet-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0883874s)
	I0604 16:25:29.495476    7940 network_create.go:272] running [docker network inspect kindnet-20220604161400-5712] to gather additional debugging logs...
	I0604 16:25:29.495476    7940 cli_runner.go:164] Run: docker network inspect kindnet-20220604161400-5712
	W0604 16:25:30.616973    7940 cli_runner.go:211] docker network inspect kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:30.616973    7940 cli_runner.go:217] Completed: docker network inspect kindnet-20220604161400-5712: (1.121484s)
	I0604 16:25:30.616973    7940 network_create.go:275] error running [docker network inspect kindnet-20220604161400-5712]: docker network inspect kindnet-20220604161400-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220604161400-5712
	I0604 16:25:30.616973    7940 network_create.go:277] output of [docker network inspect kindnet-20220604161400-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220604161400-5712
	
	** /stderr **
	I0604 16:25:30.624967    7940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:25:31.702157    7940 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0770447s)
	I0604 16:25:31.719698    7940 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a4aff8] amended:false}} dirty:map[] misses:0}
	I0604 16:25:31.719698    7940 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:25:31.737643    7940 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a4aff8] amended:true}} dirty:map[192.168.49.0:0xc000a4aff8 192.168.58.0:0xc000a4b048] misses:0}
	I0604 16:25:31.737710    7940 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:25:31.737710    7940 network_create.go:115] attempt to create docker network kindnet-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:25:31.745331    7940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712
	W0604 16:25:32.819389    7940 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:32.819389    7940 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: (1.074046s)
	E0604 16:25:32.819389    7940 network_create.go:104] error while trying to create docker network kindnet-20220604161400-5712 192.168.58.0/24: create docker network kindnet-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bef21f30b46c5a9caec4a9d8c018c00692c75666a761662d9d3ab275e70f9bcc (br-bef21f30b46c): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:25:32.819389    7940 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bef21f30b46c5a9caec4a9d8c018c00692c75666a761662d9d3ab275e70f9bcc (br-bef21f30b46c): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bef21f30b46c5a9caec4a9d8c018c00692c75666a761662d9d3ab275e70f9bcc (br-bef21f30b46c): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:25:32.833341    7940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:25:33.939315    7940 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1057184s)
	I0604 16:25:33.947355    7940 cli_runner.go:164] Run: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:25:35.056978    7940 cli_runner.go:211] docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:25:35.056978    7940 cli_runner.go:217] Completed: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: (1.1094937s)
	I0604 16:25:35.056978    7940 client.go:171] LocalClient.Create took 6.6674779s
	I0604 16:25:37.073899    7940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:25:37.080644    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:38.167652    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:38.167791    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0869957s)
	I0604 16:25:38.167791    7940 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:38.516233    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:39.591898    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:39.591898    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0756532s)
	W0604 16:25:39.591898    7940 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	
	W0604 16:25:39.591898    7940 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:39.601905    7940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:25:39.607893    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:40.676195    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:40.676195    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0682899s)
	I0604 16:25:40.676195    7940 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:40.908616    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:42.025497    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:42.025497    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.1168678s)
	W0604 16:25:42.025497    7940 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	
	W0604 16:25:42.025497    7940 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:42.025497    7940 start.go:134] duration metric: createHost completed in 13.6394208s
	I0604 16:25:42.035469    7940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:25:42.041509    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:43.125219    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:43.125219    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.083573s)
	I0604 16:25:43.125219    7940 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:43.386926    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:44.464818    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:44.464818    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0778807s)
	W0604 16:25:44.464818    7940 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	
	W0604 16:25:44.464818    7940 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:44.473825    7940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:25:44.480820    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:45.554305    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:45.554305    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.0734728s)
	I0604 16:25:45.554305    7940 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:45.770519    7940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712
	W0604 16:25:46.878562    7940 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712 returned with exit code 1
	I0604 16:25:46.878597    7940 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: (1.1078546s)
	W0604 16:25:46.878972    7940 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	
	W0604 16:25:46.879040    7940 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220604161400-5712
	I0604 16:25:46.879040    7940 fix.go:57] fixHost completed within 46.8443356s
	I0604 16:25:46.879086    7940 start.go:81] releasing machines lock for "kindnet-20220604161400-5712", held for 46.8443819s
	W0604 16:25:46.879296    7940 out.go:239] * Failed to start docker container. Running "minikube delete -p kindnet-20220604161400-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220604161400-5712 container: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220604161400-5712': mkdir /var/lib/docker/volumes/kindnet-20220604161400-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kindnet-20220604161400-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220604161400-5712 container: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220604161400-5712': mkdir /var/lib/docker/volumes/kindnet-20220604161400-5712: read-only file system
	
	I0604 16:25:46.884429    7940 out.go:177] 
	W0604 16:25:46.886155    7940 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220604161400-5712 container: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220604161400-5712': mkdir /var/lib/docker/volumes/kindnet-20220604161400-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220604161400-5712 container: docker volume create kindnet-20220604161400-5712 --label name.minikube.sigs.k8s.io=kindnet-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220604161400-5712': mkdir /var/lib/docker/volumes/kindnet-20220604161400-5712: read-only file system
	
	W0604 16:25:46.886833    7940 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:25:46.886833    7940 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:25:46.893093    7940 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kindnet/Start (77.35s)

TestStartStop/group/newest-cni/serial/Stop (26.82s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220604162348-5712 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p newest-cni-20220604162348-5712 --alsologtostderr -v=3: exit status 82 (22.7073985s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-20220604162348-5712"  ...
	* Stopping node "newest-cni-20220604162348-5712"  ...
	* Stopping node "newest-cni-20220604162348-5712"  ...
	* Stopping node "newest-cni-20220604162348-5712"  ...
	* Stopping node "newest-cni-20220604162348-5712"  ...
	* Stopping node "newest-cni-20220604162348-5712"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0604 16:25:12.972480    5440 out.go:296] Setting OutFile to fd 1544 ...
	I0604 16:25:13.032354    5440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:25:13.032354    5440 out.go:309] Setting ErrFile to fd 1684...
	I0604 16:25:13.032354    5440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:25:13.045346    5440 out.go:303] Setting JSON to false
	I0604 16:25:13.045865    5440 daemonize_windows.go:44] trying to kill existing schedule stop for profile newest-cni-20220604162348-5712...
	I0604 16:25:13.058865    5440 ssh_runner.go:195] Run: systemctl --version
	I0604 16:25:13.069197    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:15.694993    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:15.694993    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (2.6257668s)
	I0604 16:25:15.705991    5440 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0604 16:25:15.712999    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:16.806370    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:16.806370    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0931834s)
	I0604 16:25:16.806370    5440 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:17.184691    5440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:25:18.215561    5440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:25:18.215561    5440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0308589s)
	I0604 16:25:18.215561    5440 openrc.go:165] stop output: 
	E0604 16:25:18.215561    5440 daemonize_windows.go:38] error terminating scheduled stop for profile newest-cni-20220604162348-5712: stopping schedule-stop service for profile newest-cni-20220604162348-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:18.215561    5440 mustload.go:65] Loading cluster: newest-cni-20220604162348-5712
	I0604 16:25:18.216566    5440 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:25:18.216566    5440 stop.go:39] StopHost: newest-cni-20220604162348-5712
	I0604 16:25:18.220581    5440 out.go:177] * Stopping node "newest-cni-20220604162348-5712"  ...
	I0604 16:25:18.238567    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:25:19.302141    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:19.302321    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0633857s)
	W0604 16:25:19.302393    5440 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	W0604 16:25:19.302417    5440 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:19.302417    5440 retry.go:31] will retry after 937.714187ms: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:20.254793    5440 stop.go:39] StopHost: newest-cni-20220604162348-5712
	I0604 16:25:20.260175    5440 out.go:177] * Stopping node "newest-cni-20220604162348-5712"  ...
	I0604 16:25:20.276657    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:25:21.316049    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:21.316164    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0392541s)
	W0604 16:25:21.316266    5440 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	W0604 16:25:21.316358    5440 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:21.316448    5440 retry.go:31] will retry after 1.386956246s: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:22.710078    5440 stop.go:39] StopHost: newest-cni-20220604162348-5712
	I0604 16:25:22.715691    5440 out.go:177] * Stopping node "newest-cni-20220604162348-5712"  ...
	I0604 16:25:22.731480    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:25:23.796913    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:23.796985    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0653856s)
	W0604 16:25:23.797031    5440 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	W0604 16:25:23.797031    5440 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:23.797209    5440 retry.go:31] will retry after 2.670351914s: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:26.476823    5440 stop.go:39] StopHost: newest-cni-20220604162348-5712
	I0604 16:25:26.482190    5440 out.go:177] * Stopping node "newest-cni-20220604162348-5712"  ...
	I0604 16:25:26.499328    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:25:27.620340    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:27.620340    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.121s)
	W0604 16:25:27.620340    5440 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	W0604 16:25:27.620340    5440 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:27.620340    5440 retry.go:31] will retry after 1.909024939s: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:29.534669    5440 stop.go:39] StopHost: newest-cni-20220604162348-5712
	I0604 16:25:29.541510    5440 out.go:177] * Stopping node "newest-cni-20220604162348-5712"  ...
	I0604 16:25:29.559153    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:25:30.679998    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:30.679998    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1208322s)
	W0604 16:25:30.679998    5440 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	W0604 16:25:30.679998    5440 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:30.679998    5440 retry.go:31] will retry after 3.323628727s: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:34.005756    5440 stop.go:39] StopHost: newest-cni-20220604162348-5712
	I0604 16:25:34.013243    5440 out.go:177] * Stopping node "newest-cni-20220604162348-5712"  ...
	I0604 16:25:34.031742    5440 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:25:35.120779    5440 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:25:35.120945    5440 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0888273s)
	W0604 16:25:35.120989    5440 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	W0604 16:25:35.121048    5440 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:25:35.124270    5440 out.go:177] 
	W0604 16:25:35.126279    5440 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220604162348-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220604162348-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:25:35.126279    5440 out.go:239] * 
	* 
	W0604 16:25:35.384889    5440 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_53.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:25:35.389443    5440 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p newest-cni-20220604162348-5712 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220604162348-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220604162348-5712: exit status 1 (1.1287229s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220604162348-5712

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (2.9658432s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0604 16:25:39.496508    1724 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220604162348-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (26.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (10.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (2.982495s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0604 16:25:42.480669    1588 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

                                                
                                                
** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220604162348-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220604162348-5712 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9357222s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220604162348-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220604162348-5712: exit status 1 (1.1136207s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220604162348-5712

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (3.0123256s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0604 16:25:49.546981    8332 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220604162348-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (10.05s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (77.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220604161407-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220604161407-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 60 (1m17.6435708s)

                                                
                                                
-- stdout --
	* [cilium-20220604161407-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220604161407-5712 in cluster cilium-20220604161407-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220604161407-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0604 16:25:41.578445    7748 out.go:296] Setting OutFile to fd 1508 ...
	I0604 16:25:41.641573    7748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:25:41.641573    7748 out.go:309] Setting ErrFile to fd 1664...
	I0604 16:25:41.641573    7748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:25:41.655670    7748 out.go:303] Setting JSON to false
	I0604 16:25:41.657000    7748 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11013,"bootTime":1654348928,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:25:41.657968    7748 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:25:41.660080    7748 out.go:177] * [cilium-20220604161407-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:25:41.667337    7748 notify.go:193] Checking for updates...
	I0604 16:25:41.671814    7748 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:25:41.674922    7748 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:25:41.677922    7748 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:25:41.680221    7748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:25:41.683409    7748 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:25:41.683409    7748 config.go:178] Loaded profile config "kindnet-20220604161400-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:25:41.683988    7748 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:25:41.684170    7748 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:25:41.684170    7748 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:25:44.326462    7748 docker.go:137] docker version: linux-20.10.16
	I0604 16:25:44.335595    7748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:25:46.457374    7748 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1217558s)
	I0604 16:25:46.458657    7748 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:25:45.3798661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:25:46.464216    7748 out.go:177] * Using the docker driver based on user configuration
	I0604 16:25:46.467182    7748 start.go:284] selected driver: docker
	I0604 16:25:46.467240    7748 start.go:806] validating driver "docker" against <nil>
	I0604 16:25:46.467325    7748 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:25:46.548801    7748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:25:48.666291    7748 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1172834s)
	I0604 16:25:48.666572    7748 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:25:47.5971704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:25:48.666800    7748 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:25:48.667575    7748 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:25:48.671168    7748 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:25:48.673225    7748 cni.go:95] Creating CNI manager for "cilium"
	I0604 16:25:48.673225    7748 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0604 16:25:48.673225    7748 start_flags.go:306] config:
	{Name:cilium-20220604161407-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220604161407-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:25:48.675468    7748 out.go:177] * Starting control plane node cilium-20220604161407-5712 in cluster cilium-20220604161407-5712
	I0604 16:25:48.679485    7748 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:25:48.683431    7748 out.go:177] * Pulling base image ...
	I0604 16:25:48.685426    7748 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:25:48.685426    7748 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:25:48.685426    7748 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:25:48.685426    7748 cache.go:57] Caching tarball of preloaded images
	I0604 16:25:48.685426    7748 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:25:48.686508    7748 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:25:48.686730    7748 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220604161407-5712\config.json ...
	I0604 16:25:48.687004    7748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220604161407-5712\config.json: {Name:mk5c133261b66e4715aa19c20451d2d42da5bcc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:25:49.767453    7748 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:25:49.767453    7748 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:25:49.767453    7748 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:25:49.767453    7748 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:25:49.767453    7748 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:25:49.767453    7748 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:25:49.767453    7748 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:25:49.767453    7748 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:25:49.767453    7748 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:25:52.140943    7748 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:25:52.140972    7748 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:25:52.141097    7748 start.go:352] acquiring machines lock for cilium-20220604161407-5712: {Name:mkb68e593c7e4da83ff933fe932b19b62d40c25c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:25:52.141097    7748 start.go:356] acquired machines lock for "cilium-20220604161407-5712" in 0s
	I0604 16:25:52.141097    7748 start.go:91] Provisioning new machine with config: &{Name:cilium-20220604161407-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220604161407-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:25:52.141633    7748 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:25:52.145275    7748 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:25:52.145275    7748 start.go:165] libmachine.API.Create for "cilium-20220604161407-5712" (driver="docker")
	I0604 16:25:52.145275    7748 client.go:168] LocalClient.Create starting
	I0604 16:25:52.146329    7748 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:25:52.146329    7748 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:52.146329    7748 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:52.146329    7748 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:25:52.146848    7748 main.go:134] libmachine: Decoding PEM data...
	I0604 16:25:52.147006    7748 main.go:134] libmachine: Parsing certificate...
	I0604 16:25:52.156303    7748 cli_runner.go:164] Run: docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:25:53.257095    7748 cli_runner.go:211] docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:25:53.257095    7748 cli_runner.go:217] Completed: docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1007794s)
	I0604 16:25:53.266653    7748 network_create.go:272] running [docker network inspect cilium-20220604161407-5712] to gather additional debugging logs...
	I0604 16:25:53.266653    7748 cli_runner.go:164] Run: docker network inspect cilium-20220604161407-5712
	W0604 16:25:54.390760    7748 cli_runner.go:211] docker network inspect cilium-20220604161407-5712 returned with exit code 1
	I0604 16:25:54.390760    7748 cli_runner.go:217] Completed: docker network inspect cilium-20220604161407-5712: (1.1240003s)
	I0604 16:25:54.390838    7748 network_create.go:275] error running [docker network inspect cilium-20220604161407-5712]: docker network inspect cilium-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220604161407-5712
	I0604 16:25:54.390932    7748 network_create.go:277] output of [docker network inspect cilium-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220604161407-5712
	
	** /stderr **
	I0604 16:25:54.401491    7748 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:25:55.532337    7748 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1306534s)
	I0604 16:25:55.552603    7748 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005be6a8] misses:0}
	I0604 16:25:55.552603    7748 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
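	[editor's note] The free-subnet record above (Gateway 192.168.49.1, ClientMin 192.168.49.2, ClientMax 192.168.49.254, Broadcast 192.168.49.255) follows directly from the /24 CIDR. A minimal sketch of the same derivation using Python's stdlib `ipaddress` module (not minikube's actual Go implementation in network.go):

```python
import ipaddress

# The subnet minikube reserved in the log line above.
net = ipaddress.ip_network("192.168.49.0/24")

hosts = list(net.hosts())          # usable addresses: .1 through .254
gateway = hosts[0]                 # first host is taken as the gateway: 192.168.49.1
client_min = hosts[1]              # first address handed to clients: 192.168.49.2
client_max = hosts[-1]             # last usable address: 192.168.49.254
broadcast = net.broadcast_address  # 192.168.49.255

print(gateway, client_min, client_max, broadcast)
```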
	I0604 16:25:55.552603    7748 network_create.go:115] attempt to create docker network cilium-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:25:55.559527    7748 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712
	W0604 16:25:56.655689    7748 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712 returned with exit code 1
	I0604 16:25:56.658649    7748 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: (1.0961501s)
	E0604 16:25:56.658649    7748 network_create.go:104] error while trying to create docker network cilium-20220604161407-5712 192.168.49.0/24: create docker network cilium-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 229bbea8dff4467436baaaf59769600a4eee30c8b58a9ce13b0e941c56a5fc47 (br-229bbea8dff4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:25:56.659621    7748 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 229bbea8dff4467436baaaf59769600a4eee30c8b58a9ce13b0e941c56a5fc47 (br-229bbea8dff4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 229bbea8dff4467436baaaf59769600a4eee30c8b58a9ce13b0e941c56a5fc47 (br-229bbea8dff4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
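	[editor's note] The "networks have overlapping IPv4" error above means the daemon already had a bridge network (br-c61886399614) whose subnet intersects the requested 192.168.49.0/24; the `docker network inspect` calls earlier in the log are how minikube gathers each network's IPAM subnet. A sketch of the overlap test itself with Python's stdlib `ipaddress` — the pre-existing network's real CIDR is not shown in the log, so 192.168.48.0/23 below is an illustrative assumption:

```python
import ipaddress

requested = ipaddress.ip_network("192.168.49.0/24")  # subnet minikube asked for
# Hypothetical CIDR for the pre-existing bridge network; the log does not
# reveal br-c61886399614's actual subnet.
existing = ipaddress.ip_network("192.168.48.0/23")

# Two networks conflict exactly when their address ranges intersect:
# the /23 covers 192.168.48.0-192.168.49.255, which contains the /24.
print(requested.overlaps(existing))  # True
```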
	I0604 16:25:56.673611    7748 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:25:57.798529    7748 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1245856s)
	I0604 16:25:57.806249    7748 cli_runner.go:164] Run: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:25:58.917966    7748 cli_runner.go:211] docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:25:58.918092    7748 cli_runner.go:217] Completed: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: (1.111604s)
	I0604 16:25:58.918092    7748 client.go:171] LocalClient.Create took 6.7722121s
	I0604 16:26:00.943798    7748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:00.949889    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:02.059031    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:02.059031    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.1091298s)
	I0604 16:26:02.059031    7748 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:02.352561    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:03.467868    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:03.467868    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.1150758s)
	W0604 16:26:03.467868    7748 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	
	W0604 16:26:03.467868    7748 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:03.479569    7748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:03.487959    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:04.578294    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:04.578294    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.0903236s)
	I0604 16:26:04.578294    7748 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:04.889722    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:05.994975    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:05.995113    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.1051746s)
	W0604 16:26:05.995162    7748 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	
	W0604 16:26:05.995162    7748 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:05.995162    7748 start.go:134] duration metric: createHost completed in 13.8533766s
	I0604 16:26:05.995162    7748 start.go:81] releasing machines lock for "cilium-20220604161407-5712", held for 13.8539128s
	W0604 16:26:05.995544    7748 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for cilium-20220604161407-5712 container: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/cilium-20220604161407-5712': mkdir /var/lib/docker/volumes/cilium-20220604161407-5712: read-only file system
	I0604 16:26:06.020211    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:07.133015    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:07.133076    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.1124532s)
	I0604 16:26:07.133172    7748 delete.go:82] Unable to get host status for cilium-20220604161407-5712, assuming it has already been deleted: state: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	W0604 16:26:07.133515    7748 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cilium-20220604161407-5712 container: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/cilium-20220604161407-5712': mkdir /var/lib/docker/volumes/cilium-20220604161407-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cilium-20220604161407-5712 container: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/cilium-20220604161407-5712': mkdir /var/lib/docker/volumes/cilium-20220604161407-5712: read-only file system
	
	I0604 16:26:07.133609    7748 start.go:614] Will try again in 5 seconds ...
	I0604 16:26:12.144121    7748 start.go:352] acquiring machines lock for cilium-20220604161407-5712: {Name:mkb68e593c7e4da83ff933fe932b19b62d40c25c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:26:12.144121    7748 start.go:356] acquired machines lock for "cilium-20220604161407-5712" in 0s
	I0604 16:26:12.144121    7748 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:26:12.144121    7748 fix.go:55] fixHost starting: 
	I0604 16:26:12.160742    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:13.232098    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:13.232098    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0713448s)
	I0604 16:26:13.232098    7748 fix.go:103] recreateIfNeeded on cilium-20220604161407-5712: state= err=unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:13.232098    7748 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:26:13.237996    7748 out.go:177] * docker "cilium-20220604161407-5712" container is missing, will recreate.
	I0604 16:26:13.240391    7748 delete.go:124] DEMOLISHING cilium-20220604161407-5712 ...
	I0604 16:26:13.257084    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:14.326670    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:14.326670    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0695743s)
	W0604 16:26:14.326670    7748 stop.go:75] unable to get state: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:14.326670    7748 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:14.344673    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:15.483338    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:15.483338    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.1383542s)
	I0604 16:26:15.483494    7748 delete.go:82] Unable to get host status for cilium-20220604161407-5712, assuming it has already been deleted: state: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:15.490627    7748 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220604161407-5712
	W0604 16:26:16.604621    7748 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:16.604751    7748 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220604161407-5712: (1.1139819s)
	I0604 16:26:16.604751    7748 kic.go:356] could not find the container cilium-20220604161407-5712 to remove it. will try anyways
	I0604 16:26:16.612506    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:17.662479    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:17.662665    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0499609s)
	W0604 16:26:17.662796    7748 oci.go:84] error getting container status, will try to delete anyways: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:17.673237    7748 cli_runner.go:164] Run: docker exec --privileged -t cilium-20220604161407-5712 /bin/bash -c "sudo init 0"
	W0604 16:26:18.736017    7748 cli_runner.go:211] docker exec --privileged -t cilium-20220604161407-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:26:18.736051    7748 cli_runner.go:217] Completed: docker exec --privileged -t cilium-20220604161407-5712 /bin/bash -c "sudo init 0": (1.0616752s)
	I0604 16:26:18.736102    7748 oci.go:625] error shutdown cilium-20220604161407-5712: docker exec --privileged -t cilium-20220604161407-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:19.756650    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:20.849913    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:20.849972    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0932164s)
	I0604 16:26:20.850217    7748 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:20.850277    7748 oci.go:639] temporary error: container cilium-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:20.850350    7748 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:21.328125    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:22.407687    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:22.407879    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0795506s)
	I0604 16:26:22.407930    7748 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:22.407930    7748 oci.go:639] temporary error: container cilium-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:22.407930    7748 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:23.311528    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:24.363329    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:24.363329    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0512044s)
	I0604 16:26:24.363329    7748 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:24.363329    7748 oci.go:639] temporary error: container cilium-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:24.363329    7748 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:25.012672    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:26.049022    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:26.049022    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0363383s)
	I0604 16:26:26.049022    7748 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:26.049022    7748 oci.go:639] temporary error: container cilium-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:26.049022    7748 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:27.179343    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:28.250351    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:28.250558    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.0709954s)
	I0604 16:26:28.250651    7748 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:28.250651    7748 oci.go:639] temporary error: container cilium-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:28.250726    7748 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:29.779109    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:30.916489    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:30.916536    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.1372063s)
	I0604 16:26:30.916643    7748 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:30.916677    7748 oci.go:639] temporary error: container cilium-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:30.916677    7748 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:33.977445    7748 cli_runner.go:164] Run: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:35.116346    7748 cli_runner.go:211] docker container inspect cilium-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:35.116346    7748 cli_runner.go:217] Completed: docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: (1.138888s)
	I0604 16:26:35.116346    7748 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:35.116346    7748 oci.go:639] temporary error: container cilium-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:35.116346    7748 oci.go:88] couldn't shut down cilium-20220604161407-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "cilium-20220604161407-5712": docker container inspect cilium-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	 
	I0604 16:26:35.123351    7748 cli_runner.go:164] Run: docker rm -f -v cilium-20220604161407-5712
	I0604 16:26:36.212414    7748 cli_runner.go:217] Completed: docker rm -f -v cilium-20220604161407-5712: (1.0890515s)
	I0604 16:26:36.222303    7748 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220604161407-5712
	W0604 16:26:37.326539    7748 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:37.326539    7748 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220604161407-5712: (1.104224s)
	I0604 16:26:37.332522    7748 cli_runner.go:164] Run: docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:26:38.442766    7748 cli_runner.go:211] docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:26:38.442830    7748 cli_runner.go:217] Completed: docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1102314s)
	I0604 16:26:38.450108    7748 network_create.go:272] running [docker network inspect cilium-20220604161407-5712] to gather additional debugging logs...
	I0604 16:26:38.450108    7748 cli_runner.go:164] Run: docker network inspect cilium-20220604161407-5712
	W0604 16:26:39.554587    7748 cli_runner.go:211] docker network inspect cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:39.554587    7748 cli_runner.go:217] Completed: docker network inspect cilium-20220604161407-5712: (1.1042932s)
	I0604 16:26:39.554706    7748 network_create.go:275] error running [docker network inspect cilium-20220604161407-5712]: docker network inspect cilium-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220604161407-5712
	I0604 16:26:39.554706    7748 network_create.go:277] output of [docker network inspect cilium-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220604161407-5712
	
	** /stderr **
	W0604 16:26:39.555468    7748 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:26:39.555468    7748 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:26:40.570525    7748 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:26:40.574821    7748 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:26:40.575101    7748 start.go:165] libmachine.API.Create for "cilium-20220604161407-5712" (driver="docker")
	I0604 16:26:40.575101    7748 client.go:168] LocalClient.Create starting
	I0604 16:26:40.575101    7748 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:26:40.575641    7748 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:40.575882    7748 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:40.576036    7748 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:26:40.576178    7748 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:40.576274    7748 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:40.583835    7748 cli_runner.go:164] Run: docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:26:41.658254    7748 cli_runner.go:211] docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:26:41.658254    7748 cli_runner.go:217] Completed: docker network inspect cilium-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0743398s)
	I0604 16:26:41.666820    7748 network_create.go:272] running [docker network inspect cilium-20220604161407-5712] to gather additional debugging logs...
	I0604 16:26:41.666820    7748 cli_runner.go:164] Run: docker network inspect cilium-20220604161407-5712
	W0604 16:26:42.785815    7748 cli_runner.go:211] docker network inspect cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:42.785815    7748 cli_runner.go:217] Completed: docker network inspect cilium-20220604161407-5712: (1.1189833s)
	I0604 16:26:42.785815    7748 network_create.go:275] error running [docker network inspect cilium-20220604161407-5712]: docker network inspect cilium-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220604161407-5712
	I0604 16:26:42.785815    7748 network_create.go:277] output of [docker network inspect cilium-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220604161407-5712
	
	** /stderr **
	I0604 16:26:42.792818    7748 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:26:43.901839    7748 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1090081s)
	I0604 16:26:43.920848    7748 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005be6a8] amended:false}} dirty:map[] misses:0}
	I0604 16:26:43.920848    7748 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:26:43.938846    7748 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005be6a8] amended:true}} dirty:map[192.168.49.0:0xc0005be6a8 192.168.58.0:0xc000618140] misses:0}
	I0604 16:26:43.938846    7748 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:26:43.938846    7748 network_create.go:115] attempt to create docker network cilium-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:26:43.946103    7748 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712
	W0604 16:26:45.051285    7748 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:45.051285    7748 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: (1.1048787s)
	E0604 16:26:45.051367    7748 network_create.go:104] error while trying to create docker network cilium-20220604161407-5712 192.168.58.0/24: create docker network cilium-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a8d85682032f4c0ebb5bc345a1ee4f221401e49561e1bdc7157c8a71af92174b (br-a8d85682032f): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:26:45.051513    7748 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a8d85682032f4c0ebb5bc345a1ee4f221401e49561e1bdc7157c8a71af92174b (br-a8d85682032f): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a8d85682032f4c0ebb5bc345a1ee4f221401e49561e1bdc7157c8a71af92174b (br-a8d85682032f): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:26:45.066153    7748 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:26:46.165327    7748 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0991618s)
	I0604 16:26:46.176088    7748 cli_runner.go:164] Run: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:26:47.226912    7748 cli_runner.go:211] docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:26:47.226912    7748 cli_runner.go:217] Completed: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: (1.050813s)
	I0604 16:26:47.226912    7748 client.go:171] LocalClient.Create took 6.6517376s
	I0604 16:26:49.251165    7748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:49.261147    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:50.324707    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:50.324707    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.0635095s)
	I0604 16:26:50.324707    7748 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:50.666365    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:51.727489    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:51.727554    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.0609447s)
	W0604 16:26:51.727554    7748 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	
	W0604 16:26:51.727554    7748 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:51.738768    7748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:51.745248    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:52.797650    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:52.797650    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.0523904s)
	I0604 16:26:52.797650    7748 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:53.032661    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:54.126097    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:54.126097    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.093424s)
	W0604 16:26:54.126097    7748 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	
	W0604 16:26:54.126097    7748 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:54.126097    7748 start.go:134] duration metric: createHost completed in 13.5552685s
	I0604 16:26:54.136082    7748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:54.143036    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:55.220389    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:55.220389    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.077341s)
	I0604 16:26:55.220389    7748 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:55.487554    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:56.544284    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:56.544390    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.05667s)
	W0604 16:26:56.544428    7748 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	
	W0604 16:26:56.544428    7748 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:56.555307    7748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:56.561313    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:57.645349    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:57.645349    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.0840245s)
	I0604 16:26:57.645349    7748 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:57.859514    7748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712
	W0604 16:26:58.937518    7748 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712 returned with exit code 1
	I0604 16:26:58.937518    7748 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: (1.0772654s)
	W0604 16:26:58.937518    7748 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	
	W0604 16:26:58.937518    7748 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220604161407-5712
	I0604 16:26:58.937518    7748 fix.go:57] fixHost completed within 46.7928779s
	I0604 16:26:58.937518    7748 start.go:81] releasing machines lock for "cilium-20220604161407-5712", held for 46.7928779s
	W0604 16:26:58.938327    7748 out.go:239] * Failed to start docker container. Running "minikube delete -p cilium-20220604161407-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220604161407-5712 container: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/cilium-20220604161407-5712': mkdir /var/lib/docker/volumes/cilium-20220604161407-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p cilium-20220604161407-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220604161407-5712 container: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/cilium-20220604161407-5712': mkdir /var/lib/docker/volumes/cilium-20220604161407-5712: read-only file system
	
	I0604 16:26:58.947057    7748 out.go:177] 
	W0604 16:26:58.949060    7748 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220604161407-5712 container: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/cilium-20220604161407-5712': mkdir /var/lib/docker/volumes/cilium-20220604161407-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220604161407-5712 container: docker volume create cilium-20220604161407-5712 --label name.minikube.sigs.k8s.io=cilium-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/cilium-20220604161407-5712': mkdir /var/lib/docker/volumes/cilium-20220604161407-5712: read-only file system
	
	W0604 16:26:58.949060    7748 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:26:58.949060    7748 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:26:58.956084    7748 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/cilium/Start (77.74s)

TestStartStop/group/newest-cni/serial/SecondStart (118.34s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220604162348-5712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220604162348-5712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m54.0046822s)

-- stdout --
	* [newest-cni-20220604162348-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-20220604162348-5712 in cluster newest-cni-20220604162348-5712
	* Pulling base image ...
	* docker "newest-cni-20220604162348-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220604162348-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:25:49.833466    6008 out.go:296] Setting OutFile to fd 1932 ...
	I0604 16:25:49.892403    6008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:25:49.892403    6008 out.go:309] Setting ErrFile to fd 1840...
	I0604 16:25:49.892403    6008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:25:49.902770    6008 out.go:303] Setting JSON to false
	I0604 16:25:49.906078    6008 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11021,"bootTime":1654348928,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:25:49.906170    6008 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:25:49.911222    6008 out.go:177] * [newest-cni-20220604162348-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:25:49.913496    6008 notify.go:193] Checking for updates...
	I0604 16:25:49.915551    6008 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:25:49.917935    6008 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:25:49.920027    6008 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:25:49.922883    6008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:25:49.925177    6008 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:25:49.925952    6008 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:25:52.688335    6008 docker.go:137] docker version: linux-20.10.16
	I0604 16:25:52.695377    6008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:25:54.858323    6008 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1627641s)
	I0604 16:25:54.859287    6008 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:25:53.7780469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:25:54.862877    6008 out.go:177] * Using the docker driver based on existing profile
	I0604 16:25:54.865869    6008 start.go:284] selected driver: docker
	I0604 16:25:54.865869    6008 start.go:806] validating driver "docker" against &{Name:newest-cni-20220604162348-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220604162348-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:25:54.865869    6008 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:25:54.932717    6008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:25:57.028417    6008 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0955829s)
	I0604 16:25:57.028417    6008 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:25:55.9824692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:25:57.029112    6008 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0604 16:25:57.029112    6008 cni.go:95] Creating CNI manager for ""
	I0604 16:25:57.029235    6008 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 16:25:57.029285    6008 start_flags.go:306] config:
	{Name:newest-cni-20220604162348-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220604162348-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:25:57.033501    6008 out.go:177] * Starting control plane node newest-cni-20220604162348-5712 in cluster newest-cni-20220604162348-5712
	I0604 16:25:57.038796    6008 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:25:57.040555    6008 out.go:177] * Pulling base image ...
	I0604 16:25:57.044211    6008 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:25:57.044211    6008 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:25:57.044211    6008 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:25:57.044211    6008 cache.go:57] Caching tarball of preloaded images
	I0604 16:25:57.044211    6008 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:25:57.044211    6008 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:25:57.045177    6008 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220604162348-5712\config.json ...
	I0604 16:25:58.155679    6008 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:25:58.155811    6008 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:25:58.156096    6008 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:25:58.156096    6008 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:25:58.156220    6008 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:25:58.156274    6008 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:25:58.156379    6008 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:25:58.156421    6008 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:25:58.156450    6008 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:26:00.554262    6008 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:26:00.554381    6008 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:26:00.554479    6008 start.go:352] acquiring machines lock for newest-cni-20220604162348-5712: {Name:mkbd6394023b53f3734496771860f87f29caa1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:26:00.554676    6008 start.go:356] acquired machines lock for "newest-cni-20220604162348-5712" in 197.4µs
	I0604 16:26:00.554871    6008 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:26:00.554892    6008 fix.go:55] fixHost starting: 
	I0604 16:26:00.572507    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:01.683973    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:01.684106    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1114541s)
	I0604 16:26:01.684214    6008 fix.go:103] recreateIfNeeded on newest-cni-20220604162348-5712: state= err=unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:01.684272    6008 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:26:01.687132    6008 out.go:177] * docker "newest-cni-20220604162348-5712" container is missing, will recreate.
	I0604 16:26:01.689759    6008 delete.go:124] DEMOLISHING newest-cni-20220604162348-5712 ...
	I0604 16:26:01.704193    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:02.796548    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:02.796548    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0923424s)
	W0604 16:26:02.796548    6008 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:02.796548    6008 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:02.813521    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:03.939751    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:03.939959    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1262177s)
	I0604 16:26:03.940081    6008 delete.go:82] Unable to get host status for newest-cni-20220604162348-5712, assuming it has already been deleted: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:03.947800    6008 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712
	W0604 16:26:05.077585    6008 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:05.077585    6008 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712: (1.1297727s)
	I0604 16:26:05.077585    6008 kic.go:356] could not find the container newest-cni-20220604162348-5712 to remove it. will try anyways
	I0604 16:26:05.084614    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:06.195808    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:06.195808    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1111822s)
	W0604 16:26:06.195808    6008 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:06.202815    6008 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0"
	W0604 16:26:07.290043    6008 cli_runner.go:211] docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:26:07.290043    6008 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0": (1.0872155s)
	I0604 16:26:07.290043    6008 oci.go:625] error shutdown newest-cni-20220604162348-5712: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:08.313493    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:09.385248    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:09.385310    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0716554s)
	I0604 16:26:09.385418    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:09.385491    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:26:09.385564    6008 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:09.963245    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:11.019868    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:11.019868    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0566118s)
	I0604 16:26:11.019868    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:11.019868    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:26:11.019868    6008 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:12.122782    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:13.247415    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:13.247415    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1245439s)
	I0604 16:26:13.247415    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:13.247415    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:26:13.247415    6008 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:14.582714    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:15.669147    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:15.669316    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0864208s)
	I0604 16:26:15.669380    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:15.669380    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:26:15.669380    6008 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:17.264974    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:18.375972    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:18.376011    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1099145s)
	I0604 16:26:18.376111    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:18.376111    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:26:18.376151    6008 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:20.734541    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:21.810082    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:21.810082    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0754378s)
	I0604 16:26:21.810082    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:21.810082    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:26:21.810082    6008 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:26.325144    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:27.362687    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:27.362687    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0375321s)
	I0604 16:26:27.362687    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:27.362687    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:26:27.362687    6008 oci.go:88] couldn't shut down newest-cni-20220604162348-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	 
	I0604 16:26:27.368666    6008 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220604162348-5712
	I0604 16:26:28.421123    6008 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220604162348-5712: (1.052309s)
	I0604 16:26:28.429281    6008 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712
	W0604 16:26:29.454343    6008 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:29.454343    6008 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712: (1.0248942s)
	I0604 16:26:29.462949    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:26:30.521684    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:26:30.521684    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0587236s)
	I0604 16:26:30.528633    6008 network_create.go:272] running [docker network inspect newest-cni-20220604162348-5712] to gather additional debugging logs...
	I0604 16:26:30.528633    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712
	W0604 16:26:31.647443    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:31.647443    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712: (1.1185945s)
	I0604 16:26:31.647443    6008 network_create.go:275] error running [docker network inspect newest-cni-20220604162348-5712]: docker network inspect newest-cni-20220604162348-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220604162348-5712
	I0604 16:26:31.647443    6008 network_create.go:277] output of [docker network inspect newest-cni-20220604162348-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220604162348-5712
	
	** /stderr **
	W0604 16:26:31.648835    6008 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:26:31.648835    6008 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:26:32.661902    6008 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:26:32.666423    6008 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:26:32.666730    6008 start.go:165] libmachine.API.Create for "newest-cni-20220604162348-5712" (driver="docker")
	I0604 16:26:32.666730    6008 client.go:168] LocalClient.Create starting
	I0604 16:26:32.667347    6008 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:26:32.667347    6008 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:32.667347    6008 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:32.668043    6008 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:26:32.668118    6008 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:32.668118    6008 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:32.676559    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:26:33.766833    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:26:33.766833    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0902224s)
	I0604 16:26:33.784994    6008 network_create.go:272] running [docker network inspect newest-cni-20220604162348-5712] to gather additional debugging logs...
	I0604 16:26:33.784994    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712
	W0604 16:26:34.896965    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:34.900979    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712: (1.1119582s)
	I0604 16:26:34.900979    6008 network_create.go:275] error running [docker network inspect newest-cni-20220604162348-5712]: docker network inspect newest-cni-20220604162348-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220604162348-5712
	I0604 16:26:34.900979    6008 network_create.go:277] output of [docker network inspect newest-cni-20220604162348-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220604162348-5712
	
	** /stderr **
	I0604 16:26:34.907965    6008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:26:36.005066    6008 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0970889s)
	I0604 16:26:36.023008    6008 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005c8888] misses:0}
	I0604 16:26:36.023008    6008 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:26:36.023008    6008 network_create.go:115] attempt to create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:26:36.031001    6008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712
	W0604 16:26:37.138901    6008 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:37.138901    6008 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: (1.1078879s)
	E0604 16:26:37.138901    6008 network_create.go:104] error while trying to create docker network newest-cni-20220604162348-5712 192.168.49.0/24: create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f16b33b09f51b64f4eccdf5215d5228ecd4dc568b49731c4b8874aa01c9f23a (br-4f16b33b09f5): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:26:37.138901    6008 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f16b33b09f51b64f4eccdf5215d5228ecd4dc568b49731c4b8874aa01c9f23a (br-4f16b33b09f5): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f16b33b09f51b64f4eccdf5215d5228ecd4dc568b49731c4b8874aa01c9f23a (br-4f16b33b09f5): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:26:37.153930    6008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:26:38.238797    6008 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0848547s)
	I0604 16:26:38.245800    6008 cli_runner.go:164] Run: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:26:39.333385    6008 cli_runner.go:211] docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:26:39.333456    6008 cli_runner.go:217] Completed: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0874238s)
	I0604 16:26:39.333456    6008 client.go:171] LocalClient.Create took 6.6666518s
	I0604 16:26:41.356572    6008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:41.362665    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:42.436158    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:42.436158    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0734805s)
	I0604 16:26:42.436158    6008 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:42.621292    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:43.717213    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:43.717213    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0959089s)
	W0604 16:26:43.717213    6008 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:26:43.717213    6008 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:43.731407    6008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:43.738779    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:44.865694    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:44.865774    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1267738s)
	I0604 16:26:44.865829    6008 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:45.075132    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:46.197010    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:46.197010    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1218276s)
	W0604 16:26:46.197010    6008 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:26:46.197010    6008 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:46.197010    6008 start.go:134] duration metric: createHost completed in 13.5349581s
	I0604 16:26:46.207014    6008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:46.214019    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:47.258196    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:47.258403    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0441649s)
	I0604 16:26:47.258514    6008 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:47.602777    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:48.650579    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:48.650616    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0477107s)
	W0604 16:26:48.650827    6008 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:26:48.650827    6008 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:48.663983    6008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:48.672822    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:49.758790    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:49.758943    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0859559s)
	I0604 16:26:49.759104    6008 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:50.001915    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:26:51.094392    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:26:51.094392    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0924656s)
	W0604 16:26:51.094392    6008 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:26:51.094392    6008 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:51.094392    6008 fix.go:57] fixHost completed within 50.538941s
	I0604 16:26:51.094392    6008 start.go:81] releasing machines lock for "newest-cni-20220604162348-5712", held for 50.5391568s
	W0604 16:26:51.094911    6008 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	W0604 16:26:51.094980    6008 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	I0604 16:26:51.094980    6008 start.go:614] Will try again in 5 seconds ...
	I0604 16:26:56.108503    6008 start.go:352] acquiring machines lock for newest-cni-20220604162348-5712: {Name:mkbd6394023b53f3734496771860f87f29caa1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:26:56.108503    6008 start.go:356] acquired machines lock for "newest-cni-20220604162348-5712" in 0s
	I0604 16:26:56.108503    6008 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:26:56.108503    6008 fix.go:55] fixHost starting: 
	I0604 16:26:56.124266    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:57.178258    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:57.178258    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0539802s)
	I0604 16:26:57.178258    6008 fix.go:103] recreateIfNeeded on newest-cni-20220604162348-5712: state= err=unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:57.178258    6008 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:26:57.182258    6008 out.go:177] * docker "newest-cni-20220604162348-5712" container is missing, will recreate.
	I0604 16:26:57.186258    6008 delete.go:124] DEMOLISHING newest-cni-20220604162348-5712 ...
	I0604 16:26:57.202384    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:58.279644    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:58.279699    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0770203s)
	W0604 16:26:58.279699    6008 stop.go:75] unable to get state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:58.279699    6008 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:58.296610    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:26:59.421578    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:59.421578    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1249559s)
	I0604 16:26:59.421578    6008 delete.go:82] Unable to get host status for newest-cni-20220604162348-5712, assuming it has already been deleted: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:26:59.429172    6008 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712
	W0604 16:27:00.509271    6008 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:00.509271    6008 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712: (1.0800866s)
	I0604 16:27:00.509271    6008 kic.go:356] could not find the container newest-cni-20220604162348-5712 to remove it. will try anyways
	I0604 16:27:00.516268    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:01.595345    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:01.595345    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0787495s)
	W0604 16:27:01.595345    6008 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:01.604723    6008 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0"
	W0604 16:27:02.720121    6008 cli_runner.go:211] docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:27:02.720121    6008 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0": (1.1148699s)
	I0604 16:27:02.720121    6008 oci.go:625] error shutdown newest-cni-20220604162348-5712: docker exec --privileged -t newest-cni-20220604162348-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:03.739369    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:04.839774    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:04.839774    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.1003923s)
	I0604 16:27:04.839774    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:04.839774    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:27:04.839774    6008 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:05.344809    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:06.416277    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:06.416277    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0714561s)
	I0604 16:27:06.416277    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:06.416277    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:27:06.416277    6008 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:07.023078    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:08.083147    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:08.083147    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.060057s)
	I0604 16:27:08.083147    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:08.083147    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:27:08.083147    6008 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:08.989196    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:10.051532    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:10.051532    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0623242s)
	I0604 16:27:10.051532    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:10.051532    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:27:10.051532    6008 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:12.053416    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:13.128451    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:13.128451    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0746982s)
	I0604 16:27:13.128451    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:13.128451    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:27:13.128451    6008 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:14.964125    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:16.014878    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:16.014878    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.050742s)
	I0604 16:27:16.014878    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:16.014878    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:27:16.014878    6008 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:18.700729    6008 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:19.755184    6008 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:19.755184    6008 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (1.0544439s)
	I0604 16:27:19.755184    6008 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:19.755184    6008 oci.go:639] temporary error: container newest-cni-20220604162348-5712 status is  but expect it to be exited
	I0604 16:27:19.755184    6008 oci.go:88] couldn't shut down newest-cni-20220604162348-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	 
	I0604 16:27:19.765441    6008 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220604162348-5712
	I0604 16:27:20.829006    6008 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220604162348-5712: (1.063553s)
	I0604 16:27:20.836510    6008 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712
	W0604 16:27:21.915832    6008 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:21.915832    6008 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220604162348-5712: (1.0792594s)
	I0604 16:27:21.923106    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:27:22.984521    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:27:22.984521    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0612287s)
	I0604 16:27:22.993289    6008 network_create.go:272] running [docker network inspect newest-cni-20220604162348-5712] to gather additional debugging logs...
	I0604 16:27:22.993289    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712
	W0604 16:27:24.085373    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:24.085373    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712: (1.0918918s)
	I0604 16:27:24.085534    6008 network_create.go:275] error running [docker network inspect newest-cni-20220604162348-5712]: docker network inspect newest-cni-20220604162348-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220604162348-5712
	I0604 16:27:24.085534    6008 network_create.go:277] output of [docker network inspect newest-cni-20220604162348-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220604162348-5712
	
	** /stderr **
	W0604 16:27:24.086362    6008 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:27:24.086362    6008 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:27:25.101015    6008 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:27:25.105203    6008 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0604 16:27:25.105544    6008 start.go:165] libmachine.API.Create for "newest-cni-20220604162348-5712" (driver="docker")
	I0604 16:27:25.105544    6008 client.go:168] LocalClient.Create starting
	I0604 16:27:25.105681    6008 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:27:25.106257    6008 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:25.106257    6008 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:25.106445    6008 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:27:25.106445    6008 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:25.106445    6008 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:25.116274    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:27:26.199836    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:27:26.199836    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0835504s)
	I0604 16:27:26.207832    6008 network_create.go:272] running [docker network inspect newest-cni-20220604162348-5712] to gather additional debugging logs...
	I0604 16:27:26.207832    6008 cli_runner.go:164] Run: docker network inspect newest-cni-20220604162348-5712
	W0604 16:27:27.303415    6008 cli_runner.go:211] docker network inspect newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:27.303415    6008 cli_runner.go:217] Completed: docker network inspect newest-cni-20220604162348-5712: (1.0955712s)
	I0604 16:27:27.303415    6008 network_create.go:275] error running [docker network inspect newest-cni-20220604162348-5712]: docker network inspect newest-cni-20220604162348-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220604162348-5712
	I0604 16:27:27.303415    6008 network_create.go:277] output of [docker network inspect newest-cni-20220604162348-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220604162348-5712
	
	** /stderr **
	I0604 16:27:27.312777    6008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:27:28.414274    6008 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1014841s)
	I0604 16:27:28.431324    6008 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005c8888] amended:false}} dirty:map[] misses:0}
	I0604 16:27:28.431324    6008 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:27:28.448496    6008 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005c8888] amended:true}} dirty:map[192.168.49.0:0xc0005c8888 192.168.58.0:0xc000007038] misses:0}
	I0604 16:27:28.448496    6008 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:27:28.448496    6008 network_create.go:115] attempt to create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:27:28.455658    6008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712
	W0604 16:27:29.500395    6008 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:29.500455    6008 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: (1.0446238s)
	E0604 16:27:29.500455    6008 network_create.go:104] error while trying to create docker network newest-cni-20220604162348-5712 192.168.58.0/24: create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a5120de8c1d5bf2f755fda41ee0da100060bfbebc9d5330419f1dd6f4309ab7 (br-1a5120de8c1d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:27:29.500455    6008 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a5120de8c1d5bf2f755fda41ee0da100060bfbebc9d5330419f1dd6f4309ab7 (br-1a5120de8c1d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220604162348-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a5120de8c1d5bf2f755fda41ee0da100060bfbebc9d5330419f1dd6f4309ab7 (br-1a5120de8c1d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:27:29.518037    6008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:27:30.568480    6008 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0502729s)
	I0604 16:27:30.576315    6008 cli_runner.go:164] Run: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:27:31.612500    6008 cli_runner.go:211] docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:27:31.612500    6008 cli_runner.go:217] Completed: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0360616s)
	I0604 16:27:31.612500    6008 client.go:171] LocalClient.Create took 6.5068076s
	I0604 16:27:33.629010    6008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:27:33.635967    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:34.727150    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:34.727150    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0911701s)
	I0604 16:27:34.727150    6008 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:35.011450    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:36.105507    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:36.105507    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0939618s)
	W0604 16:27:36.105507    6008 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:27:36.105507    6008 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:36.116955    6008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:27:36.125342    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:37.208963    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:37.208963    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0832972s)
	I0604 16:27:37.208963    6008 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:37.420629    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:38.512774    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:38.512813    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0919142s)
	W0604 16:27:38.512875    6008 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:27:38.512875    6008 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:38.512875    6008 start.go:134] duration metric: createHost completed in 13.4115039s
	I0604 16:27:38.524128    6008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:27:38.532446    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:39.612582    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:39.612582    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0801242s)
	I0604 16:27:39.612582    6008 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:39.940804    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:40.997680    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:40.997680    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.0567439s)
	W0604 16:27:40.997818    6008 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:27:40.997818    6008 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:41.008667    6008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:27:41.014500    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:42.058888    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:42.058888    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.044376s)
	I0604 16:27:42.058888    6008 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:42.418983    6008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712
	W0604 16:27:43.523853    6008 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712 returned with exit code 1
	I0604 16:27:43.523999    6008 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: (1.1048324s)
	W0604 16:27:43.523999    6008 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:27:43.523999    6008 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220604162348-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220604162348-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	I0604 16:27:43.523999    6008 fix.go:57] fixHost completed within 47.4149699s
	I0604 16:27:43.523999    6008 start.go:81] releasing machines lock for "newest-cni-20220604162348-5712", held for 47.4149699s
	W0604 16:27:43.525670    6008 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220604162348-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220604162348-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	I0604 16:27:43.550134    6008 out.go:177] 
	W0604 16:27:43.552966    6008 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220604162348-5712 container: docker volume create newest-cni-20220604162348-5712 --label name.minikube.sigs.k8s.io=newest-cni-20220604162348-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220604162348-5712: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220604162348-5712': mkdir /var/lib/docker/volumes/newest-cni-20220604162348-5712: read-only file system
	
	W0604 16:27:43.552966    6008 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:27:43.552966    6008 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:27:43.557249    6008 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-20220604162348-5712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220604162348-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220604162348-5712: exit status 1 (1.1407902s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (2.9930165s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:27:47.887535    7024 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220604162348-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (118.34s)

TestNetworkPlugins/group/calico/Start (77.65s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220604161407-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220604161407-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 60 (1m17.5681938s)

-- stdout --
	* [calico-20220604161407-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220604161407-5712 in cluster calico-20220604161407-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20220604161407-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:26:00.063384    3368 out.go:296] Setting OutFile to fd 964 ...
	I0604 16:26:00.121206    3368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:26:00.121206    3368 out.go:309] Setting ErrFile to fd 1444...
	I0604 16:26:00.122177    3368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:26:00.132933    3368 out.go:303] Setting JSON to false
	I0604 16:26:00.135641    3368 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11032,"bootTime":1654348928,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:26:00.135641    3368 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:26:00.139081    3368 out.go:177] * [calico-20220604161407-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:26:00.142713    3368 notify.go:193] Checking for updates...
	I0604 16:26:00.161731    3368 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:26:00.164896    3368 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:26:00.167685    3368 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:26:00.172835    3368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:26:00.175826    3368 config.go:178] Loaded profile config "cilium-20220604161407-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:26:00.175826    3368 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:26:00.176908    3368 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:26:00.177576    3368 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:26:00.177576    3368 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:26:02.890507    3368 docker.go:137] docker version: linux-20.10.16
	I0604 16:26:02.899058    3368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:26:05.030182    3368 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1308281s)
	I0604 16:26:05.031300    3368 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:26:04.0046008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:26:05.035033    3368 out.go:177] * Using the docker driver based on user configuration
	I0604 16:26:05.037222    3368 start.go:284] selected driver: docker
	I0604 16:26:05.037272    3368 start.go:806] validating driver "docker" against <nil>
	I0604 16:26:05.037331    3368 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:26:05.107297    3368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:26:07.210745    3368 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1032629s)
	I0604 16:26:07.210807    3368 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:26:06.1755008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:26:07.210807    3368 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:26:07.211914    3368 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:26:07.214958    3368 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:26:07.217406    3368 cni.go:95] Creating CNI manager for "calico"
	I0604 16:26:07.217406    3368 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0604 16:26:07.217406    3368 start_flags.go:306] config:
	{Name:calico-20220604161407-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220604161407-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:26:07.221094    3368 out.go:177] * Starting control plane node calico-20220604161407-5712 in cluster calico-20220604161407-5712
	I0604 16:26:07.222953    3368 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:26:07.224456    3368 out.go:177] * Pulling base image ...
	I0604 16:26:07.228223    3368 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:26:07.228223    3368 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:26:07.228474    3368 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:26:07.228532    3368 cache.go:57] Caching tarball of preloaded images
	I0604 16:26:07.228672    3368 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:26:07.229021    3368 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:26:07.229021    3368 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220604161407-5712\config.json ...
	I0604 16:26:07.229021    3368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220604161407-5712\config.json: {Name:mkdd58d9e637ff50f3609e1c3b09403a0332c1d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:26:08.318275    3368 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:26:08.318275    3368 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:26:08.318275    3368 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:26:08.318275    3368 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:26:08.318275    3368 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:26:08.318275    3368 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:26:08.318275    3368 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:26:08.318275    3368 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:26:08.318275    3368 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:26:10.715617    3368 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:26:10.715680    3368 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:26:10.715777    3368 start.go:352] acquiring machines lock for calico-20220604161407-5712: {Name:mk767966897d298e64728098805f401c8ef528cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:26:10.715998    3368 start.go:356] acquired machines lock for "calico-20220604161407-5712" in 80.3µs
	I0604 16:26:10.716306    3368 start.go:91] Provisioning new machine with config: &{Name:calico-20220604161407-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220604161407-5712 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:26:10.716332    3368 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:26:10.719560    3368 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:26:10.720544    3368 start.go:165] libmachine.API.Create for "calico-20220604161407-5712" (driver="docker")
	I0604 16:26:10.720544    3368 client.go:168] LocalClient.Create starting
	I0604 16:26:10.721569    3368 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:26:10.721661    3368 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:10.721661    3368 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:10.721661    3368 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:26:10.721661    3368 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:10.721661    3368 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:10.731613    3368 cli_runner.go:164] Run: docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:26:11.783749    3368 cli_runner.go:211] docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:26:11.783749    3368 cli_runner.go:217] Completed: docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0520977s)
	I0604 16:26:11.791870    3368 network_create.go:272] running [docker network inspect calico-20220604161407-5712] to gather additional debugging logs...
	I0604 16:26:11.791870    3368 cli_runner.go:164] Run: docker network inspect calico-20220604161407-5712
	W0604 16:26:12.873290    3368 cli_runner.go:211] docker network inspect calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:12.873407    3368 cli_runner.go:217] Completed: docker network inspect calico-20220604161407-5712: (1.0813213s)
	I0604 16:26:12.873593    3368 network_create.go:275] error running [docker network inspect calico-20220604161407-5712]: docker network inspect calico-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220604161407-5712
	I0604 16:26:12.873650    3368 network_create.go:277] output of [docker network inspect calico-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220604161407-5712
	
	** /stderr **
	I0604 16:26:12.881730    3368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:26:14.027397    3368 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1456542s)
	I0604 16:26:14.048185    3368 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a7e188] misses:0}
	I0604 16:26:14.049095    3368 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:26:14.049095    3368 network_create.go:115] attempt to create docker network calico-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:26:14.056820    3368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712
	W0604 16:26:15.234269    3368 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:15.234269    3368 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: (1.1774365s)
	E0604 16:26:15.234269    3368 network_create.go:104] error while trying to create docker network calico-20220604161407-5712 192.168.49.0/24: create docker network calico-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3cc40fe8583a96959771c5f18346dbb49b8a4059d9f281162717d49b7e67fa37 (br-3cc40fe8583a): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:26:15.234269    3368 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3cc40fe8583a96959771c5f18346dbb49b8a4059d9f281162717d49b7e67fa37 (br-3cc40fe8583a): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220604161407-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3cc40fe8583a96959771c5f18346dbb49b8a4059d9f281162717d49b7e67fa37 (br-3cc40fe8583a): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:26:15.250212    3368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:26:16.369403    3368 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1191782s)
	I0604 16:26:16.377326    3368 cli_runner.go:164] Run: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:26:17.442527    3368 cli_runner.go:211] docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:26:17.442527    3368 cli_runner.go:217] Completed: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0651893s)
	I0604 16:26:17.442527    3368 client.go:171] LocalClient.Create took 6.7219083s
	I0604 16:26:19.460877    3368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:26:19.466961    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:26:20.535992    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:20.535992    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.0688655s)
	I0604 16:26:20.535992    3368 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:20.826741    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:26:21.934243    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:21.934243    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.1073503s)
	W0604 16:26:21.934472    3368 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	
	W0604 16:26:21.934549    3368 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:21.948170    3368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:26:21.954818    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:26:23.069371    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:23.069371    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.1143915s)
	I0604 16:26:23.069371    3368 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:23.373475    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:26:24.472809    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:24.472875    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.0993222s)
	W0604 16:26:24.472875    3368 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	
	W0604 16:26:24.472875    3368 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:24.472875    3368 start.go:134] duration metric: createHost completed in 13.756391s
	I0604 16:26:24.472875    3368 start.go:81] releasing machines lock for "calico-20220604161407-5712", held for 13.7566917s
	W0604 16:26:24.472875    3368 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for calico-20220604161407-5712 container: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/calico-20220604161407-5712': mkdir /var/lib/docker/volumes/calico-20220604161407-5712: read-only file system
	I0604 16:26:24.492134    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:25.539384    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:25.539384    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0470317s)
	I0604 16:26:25.539505    3368 delete.go:82] Unable to get host status for calico-20220604161407-5712, assuming it has already been deleted: state: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	W0604 16:26:25.539505    3368 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220604161407-5712 container: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/calico-20220604161407-5712': mkdir /var/lib/docker/volumes/calico-20220604161407-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220604161407-5712 container: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/calico-20220604161407-5712': mkdir /var/lib/docker/volumes/calico-20220604161407-5712: read-only file system
	
	I0604 16:26:25.539505    3368 start.go:614] Will try again in 5 seconds ...
	I0604 16:26:30.552490    3368 start.go:352] acquiring machines lock for calico-20220604161407-5712: {Name:mk767966897d298e64728098805f401c8ef528cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:26:30.552490    3368 start.go:356] acquired machines lock for "calico-20220604161407-5712" in 0s
	I0604 16:26:30.552490    3368 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:26:30.552490    3368 fix.go:55] fixHost starting: 
	I0604 16:26:30.570944    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:31.647443    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:31.647443    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0757678s)
	I0604 16:26:31.647443    3368 fix.go:103] recreateIfNeeded on calico-20220604161407-5712: state= err=unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:31.647443    3368 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:26:31.652485    3368 out.go:177] * docker "calico-20220604161407-5712" container is missing, will recreate.
	I0604 16:26:31.654458    3368 delete.go:124] DEMOLISHING calico-20220604161407-5712 ...
	I0604 16:26:31.669432    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:32.741240    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:32.741240    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0717967s)
	W0604 16:26:32.741240    3368 stop.go:75] unable to get state: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:32.741240    3368 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:32.756251    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:33.844354    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:33.844354    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0880901s)
	I0604 16:26:33.844354    3368 delete.go:82] Unable to get host status for calico-20220604161407-5712, assuming it has already been deleted: state: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:33.851308    3368 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220604161407-5712
	W0604 16:26:34.927965    3368 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:34.927965    3368 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220604161407-5712: (1.076645s)
	I0604 16:26:34.927965    3368 kic.go:356] could not find the container calico-20220604161407-5712 to remove it. will try anyways
	I0604 16:26:34.936378    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:36.021005    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:36.021005    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.084098s)
	W0604 16:26:36.021005    3368 oci.go:84] error getting container status, will try to delete anyways: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:36.028001    3368 cli_runner.go:164] Run: docker exec --privileged -t calico-20220604161407-5712 /bin/bash -c "sudo init 0"
	W0604 16:26:37.122851    3368 cli_runner.go:211] docker exec --privileged -t calico-20220604161407-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:26:37.122851    3368 cli_runner.go:217] Completed: docker exec --privileged -t calico-20220604161407-5712 /bin/bash -c "sudo init 0": (1.0948377s)
	I0604 16:26:37.122851    3368 oci.go:625] error shutdown calico-20220604161407-5712: docker exec --privileged -t calico-20220604161407-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:38.134332    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:39.254079    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:39.254079    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.1197344s)
	I0604 16:26:39.254079    3368 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:39.254079    3368 oci.go:639] temporary error: container calico-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:39.254079    3368 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:39.737168    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:40.786900    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:40.786900    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.04972s)
	I0604 16:26:40.786900    3368 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:40.786900    3368 oci.go:639] temporary error: container calico-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:40.786900    3368 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:41.697601    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:42.769819    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:42.769819    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0722058s)
	I0604 16:26:42.769819    3368 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:42.769819    3368 oci.go:639] temporary error: container calico-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:42.769819    3368 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:43.413908    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:44.487060    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:44.487060    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.073141s)
	I0604 16:26:44.487060    3368 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:44.487060    3368 oci.go:639] temporary error: container calico-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:44.487060    3368 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:45.614651    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:46.688219    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:46.688219    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0735567s)
	I0604 16:26:46.688219    3368 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:46.688219    3368 oci.go:639] temporary error: container calico-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:46.688219    3368 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:48.218235    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:49.255154    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:49.255154    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0369066s)
	I0604 16:26:49.255154    3368 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:49.255154    3368 oci.go:639] temporary error: container calico-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:49.255154    3368 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:52.317590    3368 cli_runner.go:164] Run: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}
	W0604 16:26:53.393014    3368 cli_runner.go:211] docker container inspect calico-20220604161407-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:53.393164    3368 cli_runner.go:217] Completed: docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: (1.0754122s)
	I0604 16:26:53.393233    3368 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:26:53.393233    3368 oci.go:639] temporary error: container calico-20220604161407-5712 status is  but expect it to be exited
	I0604 16:26:53.393295    3368 oci.go:88] couldn't shut down calico-20220604161407-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-20220604161407-5712": docker container inspect calico-20220604161407-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	 
	I0604 16:26:53.400110    3368 cli_runner.go:164] Run: docker rm -f -v calico-20220604161407-5712
	I0604 16:26:54.457472    3368 cli_runner.go:217] Completed: docker rm -f -v calico-20220604161407-5712: (1.05735s)
	I0604 16:26:54.465563    3368 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220604161407-5712
	W0604 16:26:55.553628    3368 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:55.553628    3368 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220604161407-5712: (1.0875274s)
	I0604 16:26:55.562357    3368 cli_runner.go:164] Run: docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:26:56.621142    3368 cli_runner.go:211] docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:26:56.621142    3368 cli_runner.go:217] Completed: docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0587733s)
	I0604 16:26:56.629139    3368 network_create.go:272] running [docker network inspect calico-20220604161407-5712] to gather additional debugging logs...
	I0604 16:26:56.629139    3368 cli_runner.go:164] Run: docker network inspect calico-20220604161407-5712
	W0604 16:26:57.692340    3368 cli_runner.go:211] docker network inspect calico-20220604161407-5712 returned with exit code 1
	I0604 16:26:57.692340    3368 cli_runner.go:217] Completed: docker network inspect calico-20220604161407-5712: (1.0631891s)
	I0604 16:26:57.692340    3368 network_create.go:275] error running [docker network inspect calico-20220604161407-5712]: docker network inspect calico-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220604161407-5712
	I0604 16:26:57.692340    3368 network_create.go:277] output of [docker network inspect calico-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220604161407-5712
	
	** /stderr **
	W0604 16:26:57.693343    3368 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:26:57.693343    3368 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:26:58.705043    3368 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:26:58.709603    3368 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:26:58.709603    3368 start.go:165] libmachine.API.Create for "calico-20220604161407-5712" (driver="docker")
	I0604 16:26:58.709603    3368 client.go:168] LocalClient.Create starting
	I0604 16:26:58.710298    3368 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:26:58.710298    3368 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:58.710298    3368 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:58.710888    3368 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:26:58.710888    3368 main.go:134] libmachine: Decoding PEM data...
	I0604 16:26:58.710888    3368 main.go:134] libmachine: Parsing certificate...
	I0604 16:26:58.719793    3368 cli_runner.go:164] Run: docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:26:59.827137    3368 cli_runner.go:211] docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:26:59.827137    3368 cli_runner.go:217] Completed: docker network inspect calico-20220604161407-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1073321s)
	I0604 16:26:59.836135    3368 network_create.go:272] running [docker network inspect calico-20220604161407-5712] to gather additional debugging logs...
	I0604 16:26:59.836135    3368 cli_runner.go:164] Run: docker network inspect calico-20220604161407-5712
	W0604 16:27:00.977853    3368 cli_runner.go:211] docker network inspect calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:00.978031    3368 cli_runner.go:217] Completed: docker network inspect calico-20220604161407-5712: (1.1417051s)
	I0604 16:27:00.978077    3368 network_create.go:275] error running [docker network inspect calico-20220604161407-5712]: docker network inspect calico-20220604161407-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220604161407-5712
	I0604 16:27:00.978183    3368 network_create.go:277] output of [docker network inspect calico-20220604161407-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220604161407-5712
	
	** /stderr **
	I0604 16:27:00.986570    3368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:27:02.050295    3368 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0637129s)
	I0604 16:27:02.066210    3368 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a7e188] amended:false}} dirty:map[] misses:0}
	I0604 16:27:02.066210    3368 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:27:02.081199    3368 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a7e188] amended:true}} dirty:map[192.168.49.0:0xc000a7e188 192.168.58.0:0xc0001269a0] misses:0}
	I0604 16:27:02.081199    3368 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:27:02.081199    3368 network_create.go:115] attempt to create docker network calico-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:27:02.088196    3368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712
	W0604 16:27:03.209773    3368 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:03.209773    3368 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: (1.1214697s)
	E0604 16:27:03.209773    3368 network_create.go:104] error while trying to create docker network calico-20220604161407-5712 192.168.58.0/24: create docker network calico-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e9273d1e683c043566e88d9137fede76cadd0ebdf2eb82605284e9d60ca4141 (br-3e9273d1e683): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:27:03.209773    3368 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e9273d1e683c043566e88d9137fede76cadd0ebdf2eb82605284e9d60ca4141 (br-3e9273d1e683): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220604161407-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220604161407-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3e9273d1e683c043566e88d9137fede76cadd0ebdf2eb82605284e9d60ca4141 (br-3e9273d1e683): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:27:03.224872    3368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:27:04.338439    3368 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1135543s)
	I0604 16:27:04.346733    3368 cli_runner.go:164] Run: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:27:05.434810    3368 cli_runner.go:211] docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:27:05.434810    3368 cli_runner.go:217] Completed: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: (1.088065s)
	I0604 16:27:05.434810    3368 client.go:171] LocalClient.Create took 6.7251327s
	I0604 16:27:07.446158    3368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:27:07.452662    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:08.536882    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:08.536882    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.0842077s)
	I0604 16:27:08.536882    3368 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:08.880526    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:09.973613    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:09.973613    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.0930749s)
	W0604 16:27:09.973613    3368 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	
	W0604 16:27:09.973613    3368 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:09.986550    3368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:27:09.993556    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:11.073929    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:11.073929    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.0803617s)
	I0604 16:27:11.073929    3368 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:11.305852    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:12.451166    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:12.451166    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.1449565s)
	W0604 16:27:12.451326    3368 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	
	W0604 16:27:12.451326    3368 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:12.451326    3368 start.go:134] duration metric: createHost completed in 13.7459379s
	I0604 16:27:12.461614    3368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:27:12.468549    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:13.552223    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:13.552293    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.0835791s)
	I0604 16:27:13.552441    3368 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:13.812784    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:14.909324    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:14.909324    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.096527s)
	W0604 16:27:14.909324    3368 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	
	W0604 16:27:14.909324    3368 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:14.918696    3368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:27:14.927070    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:16.046915    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:16.046915    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.119832s)
	I0604 16:27:16.046915    3368 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:16.260215    3368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712
	W0604 16:27:17.340982    3368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712 returned with exit code 1
	I0604 16:27:17.340982    3368 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: (1.0806605s)
	W0604 16:27:17.340982    3368 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	
	W0604 16:27:17.340982    3368 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220604161407-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220604161407-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220604161407-5712
	I0604 16:27:17.341498    3368 fix.go:57] fixHost completed within 46.7884882s
	I0604 16:27:17.341555    3368 start.go:81] releasing machines lock for "calico-20220604161407-5712", held for 46.7885455s
	W0604 16:27:17.341648    3368 out.go:239] * Failed to start docker container. Running "minikube delete -p calico-20220604161407-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220604161407-5712 container: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/calico-20220604161407-5712': mkdir /var/lib/docker/volumes/calico-20220604161407-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p calico-20220604161407-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220604161407-5712 container: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/calico-20220604161407-5712': mkdir /var/lib/docker/volumes/calico-20220604161407-5712: read-only file system
	
	I0604 16:27:17.346199    3368 out.go:177] 
	W0604 16:27:17.348617    3368 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220604161407-5712 container: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/calico-20220604161407-5712': mkdir /var/lib/docker/volumes/calico-20220604161407-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220604161407-5712 container: docker volume create calico-20220604161407-5712 --label name.minikube.sigs.k8s.io=calico-20220604161407-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220604161407-5712: error while creating volume root path '/var/lib/docker/volumes/calico-20220604161407-5712': mkdir /var/lib/docker/volumes/calico-20220604161407-5712: read-only file system
	
	W0604 16:27:17.348757    3368 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:27:17.348757    3368 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:27:17.352640    3368 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/calico/Start (77.65s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (3.97s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220604162205-5712" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.0513948s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9102475s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:26:22.801902    9180 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (3.97s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (4.18s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220604162205-5712" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220604162205-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220604162205-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (245.9389ms)

** stderr ** 
	error: context "default-k8s-different-port-20220604162205-5712" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20220604162205-5712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.0958356s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.8227691s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:26:26.982150    6056 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (4.18s)
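Editor's note: the repeated `context "..." does not exist` errors above mean the profile's entry is gone from kubeconfig, so every `kubectl --context ...` call fails before reaching the cluster. A minimal sketch of the lookup kubectl performs, using a plain dict in place of a parsed kubeconfig file (real kubeconfigs are YAML; the structure shown is the standard `contexts` list, but this is an illustration, not kubectl's actual code):

```python
def context_exists(kubeconfig: dict, name: str) -> bool:
    """Check whether a named context appears in a parsed kubeconfig,
    mirroring what `kubectl --context <name>` validates up front."""
    return any(c.get("name") == name for c in kubeconfig.get("contexts", []))

# Hypothetical kubeconfig contents after the profile was deleted:
cfg = {"contexts": [{"name": "minikube", "context": {"cluster": "minikube"}}]}

print(context_exists(cfg, "default-k8s-different-port-20220604162205-5712"))  # → False
print(context_exists(cfg, "minikube"))  # → True
```

This is why the describe call exits 1 immediately (245ms) rather than timing out: the failure is local to the client config.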

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.32s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220604162205-5712 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220604162205-5712 "sudo crictl images -o json": exit status 80 (3.1936852s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220604162205-5712 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
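Editor's note: the want/got diff above comes from decoding the output of `crictl images -o json` and comparing each image's repo tags against the expected v1.23.6 list; because the ssh command produced no output, the decode failed with "unexpected end of JSON input" and every expected image shows as missing. A sketch of that check under the assumption that crictl's JSON has the CRI shape (`images` → `repoTags`), using sample data rather than a live node:

```python
import json

# Hypothetical, truncated sample of `crictl images -o json` output.
sample = '''{"images": [
  {"repoTags": ["k8s.gcr.io/pause:3.6"]},
  {"repoTags": ["k8s.gcr.io/kube-proxy:v1.23.6"]}
]}'''

WANT = {
    "gcr.io/k8s-minikube/storage-provisioner:v5",
    "k8s.gcr.io/coredns/coredns:v1.8.6",
    "k8s.gcr.io/etcd:3.5.1-0",
    "k8s.gcr.io/kube-apiserver:v1.23.6",
    "k8s.gcr.io/kube-controller-manager:v1.23.6",
    "k8s.gcr.io/kube-proxy:v1.23.6",
    "k8s.gcr.io/kube-scheduler:v1.23.6",
    "k8s.gcr.io/pause:3.6",
}

def missing_images(raw: str, want: set) -> set:
    """Return expected image tags absent from the crictl JSON listing.

    An empty string (as in the failed ssh call above) raises
    json.JSONDecodeError instead of returning a diff.
    """
    got = set()
    for img in json.loads(raw).get("images", []):
        got.update(img.get("repoTags") or [])
    return want - got

print(sorted(missing_images(sample, WANT)))
```

With empty input the comparison never runs, so the "-want" list in the log is simply the full expected set.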
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1521742s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9593691s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:26:34.298227    1892 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.32s)

TestStartStop/group/default-k8s-different-port/serial/Pause (11.53s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220604162205-5712 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220604162205-5712 --alsologtostderr -v=1: exit status 80 (3.3222146s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 16:26:34.565171    6168 out.go:296] Setting OutFile to fd 1380 ...
	I0604 16:26:34.623171    6168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:26:34.623171    6168 out.go:309] Setting ErrFile to fd 1748...
	I0604 16:26:34.623171    6168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:26:34.634167    6168 out.go:303] Setting JSON to false
	I0604 16:26:34.634167    6168 mustload.go:65] Loading cluster: default-k8s-different-port-20220604162205-5712
	I0604 16:26:34.635172    6168 config.go:178] Loaded profile config "default-k8s-different-port-20220604162205-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:26:34.652167    6168 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}
	W0604 16:26:37.342234    6168 cli_runner.go:211] docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:26:37.342234    6168 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: (2.6898557s)
	I0604 16:26:37.345030    6168 out.go:177] 
	W0604 16:26:37.348427    6168 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712
	
	W0604 16:26:37.348427    6168 out.go:239] * 
	* 
	W0604 16:26:37.612728    6168 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:26:37.615405    6168 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220604162205-5712 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1751739s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9488715s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:26:41.753078    6516 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220604162205-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220604162205-5712: exit status 1 (1.1193111s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220604162205-5712 -n default-k8s-different-port-20220604162205-5712: exit status 7 (2.9419666s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:26:45.823304    6244 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220604162205-5712": docker container inspect default-k8s-different-port-20220604162205-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220604162205-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220604162205-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (11.53s)

TestNetworkPlugins/group/false/Start (77.46s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220604161400-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20220604161400-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: exit status 60 (1m17.3640636s)

-- stdout --
	* [false-20220604161400-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node false-20220604161400-5712 in cluster false-20220604161400-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-20220604161400-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:27:02.201189    6672 out.go:296] Setting OutFile to fd 1380 ...
	I0604 16:27:02.259189    6672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:02.259189    6672 out.go:309] Setting ErrFile to fd 1748...
	I0604 16:27:02.259189    6672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:02.270191    6672 out.go:303] Setting JSON to false
	I0604 16:27:02.272186    6672 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11094,"bootTime":1654348928,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:27:02.272186    6672 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:27:02.278186    6672 out.go:177] * [false-20220604161400-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:27:02.281202    6672 notify.go:193] Checking for updates...
	I0604 16:27:02.285186    6672 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:27:02.287186    6672 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:27:02.290186    6672 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:27:02.294186    6672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:27:02.298383    6672 config.go:178] Loaded profile config "calico-20220604161407-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:02.299075    6672 config.go:178] Loaded profile config "cilium-20220604161407-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:02.299075    6672 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:02.299716    6672 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:02.299716    6672 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:27:05.043616    6672 docker.go:137] docker version: linux-20.10.16
	I0604 16:27:05.051838    6672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:27:07.125931    6672 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0740703s)
	I0604 16:27:07.126699    6672 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:27:06.1242455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:27:07.129573    6672 out.go:177] * Using the docker driver based on user configuration
	I0604 16:27:07.133035    6672 start.go:284] selected driver: docker
	I0604 16:27:07.133035    6672 start.go:806] validating driver "docker" against <nil>
	I0604 16:27:07.133035    6672 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:27:07.210332    6672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:27:09.310896    6672 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1002756s)
	I0604 16:27:09.310896    6672 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:27:08.2880037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:27:09.310896    6672 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:27:09.311563    6672 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:27:09.314753    6672 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:27:09.317265    6672 cni.go:95] Creating CNI manager for "false"
	I0604 16:27:09.317265    6672 start_flags.go:306] config:
	{Name:false-20220604161400-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220604161400-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:27:09.319850    6672 out.go:177] * Starting control plane node false-20220604161400-5712 in cluster false-20220604161400-5712
	I0604 16:27:09.323749    6672 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:27:09.326921    6672 out.go:177] * Pulling base image ...
	I0604 16:27:09.330162    6672 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:27:09.330162    6672 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:27:09.330162    6672 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:27:09.330162    6672 cache.go:57] Caching tarball of preloaded images
	I0604 16:27:09.330679    6672 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:27:09.330823    6672 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:27:09.330823    6672 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220604161400-5712\config.json ...
	I0604 16:27:09.331348    6672 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220604161400-5712\config.json: {Name:mk7caa26dcc55704808f9de3f619af826f8893d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:27:10.428055    6672 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:27:10.428241    6672 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:10.428293    6672 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:10.428293    6672 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:27:10.428293    6672 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:27:10.428293    6672 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:27:10.428833    6672 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:27:10.428939    6672 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:27:10.428969    6672 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:12.793537    6672 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:27:12.793537    6672 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:27:12.793537    6672 start.go:352] acquiring machines lock for false-20220604161400-5712: {Name:mkbab35df702c1a2e8d7d50a3ec53c1f1cd4ed99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:27:12.793537    6672 start.go:356] acquired machines lock for "false-20220604161400-5712" in 0s
	I0604 16:27:12.794158    6672 start.go:91] Provisioning new machine with config: &{Name:false-20220604161400-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220604161400-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:27:12.794158    6672 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:27:12.799007    6672 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:27:12.799007    6672 start.go:165] libmachine.API.Create for "false-20220604161400-5712" (driver="docker")
	I0604 16:27:12.799007    6672 client.go:168] LocalClient.Create starting
	I0604 16:27:12.799811    6672 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:27:12.799811    6672 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:12.799811    6672 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:12.800531    6672 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:27:12.800581    6672 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:12.800581    6672 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:12.813382    6672 cli_runner.go:164] Run: docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:27:13.914351    6672 cli_runner.go:211] docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:27:13.914351    6672 cli_runner.go:217] Completed: docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1009569s)
	I0604 16:27:13.920351    6672 network_create.go:272] running [docker network inspect false-20220604161400-5712] to gather additional debugging logs...
	I0604 16:27:13.920351    6672 cli_runner.go:164] Run: docker network inspect false-20220604161400-5712
	W0604 16:27:14.972326    6672 cli_runner.go:211] docker network inspect false-20220604161400-5712 returned with exit code 1
	I0604 16:27:14.972326    6672 cli_runner.go:217] Completed: docker network inspect false-20220604161400-5712: (1.0519634s)
	I0604 16:27:14.972432    6672 network_create.go:275] error running [docker network inspect false-20220604161400-5712]: docker network inspect false-20220604161400-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220604161400-5712
	I0604 16:27:14.972637    6672 network_create.go:277] output of [docker network inspect false-20220604161400-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220604161400-5712
	
	** /stderr **
	I0604 16:27:14.986874    6672 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:27:15.999861    6672 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0129761s)
	I0604 16:27:16.019885    6672 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00052a0c0] misses:0}
	I0604 16:27:16.019885    6672 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:27:16.019885    6672 network_create.go:115] attempt to create docker network false-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:27:16.026871    6672 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712
	W0604 16:27:17.092112    6672 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712 returned with exit code 1
	I0604 16:27:17.092402    6672 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: (1.0652291s)
	E0604 16:27:17.092490    6672 network_create.go:104] error while trying to create docker network false-20220604161400-5712 192.168.49.0/24: create docker network false-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 96128856f704d470f5b7a7c3034657054475065429086986f5d9b7af01420271 (br-96128856f704): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:27:17.092730    6672 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 96128856f704d470f5b7a7c3034657054475065429086986f5d9b7af01420271 (br-96128856f704): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220604161400-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 96128856f704d470f5b7a7c3034657054475065429086986f5d9b7af01420271 (br-96128856f704): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
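The `docker network create` failure above is a subnet collision: the requested 192.168.49.0/24 bridge overlaps an existing Docker bridge network. As a minimal sketch of the check involved (illustrative only, not minikube's or Docker's actual code), two CIDR ranges conflict exactly when one contains any address of the other, which Python's standard `ipaddress` module can test directly:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True when the two CIDR blocks share any IPv4 address."""
    # ip_network.overlaps() is true if either network contains part of the other.
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The subnet minikube tried to reserve conflicts with itself or any sub-range:
subnets_overlap("192.168.49.0/24", "192.168.49.128/25")  # True
subnets_overlap("192.168.49.0/24", "192.168.50.0/24")    # False
```

Running `subnets_overlap` against the subnets reported by `docker network inspect` for each existing bridge would identify which network (here `br-c61886399614`) holds the colliding range.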
	I0604 16:27:17.107052    6672 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:27:18.222503    6672 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1153865s)
	I0604 16:27:18.231789    6672 cli_runner.go:164] Run: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:27:19.362600    6672 cli_runner.go:211] docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:27:19.362600    6672 cli_runner.go:217] Completed: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: (1.130798s)
	I0604 16:27:19.362600    6672 client.go:171] LocalClient.Create took 6.5635199s
	I0604 16:27:21.383042    6672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:27:21.389042    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:27:22.479040    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:27:22.479040    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.0899859s)
	I0604 16:27:22.479040    6672 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
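The `retry.go:31` lines above show minikube's retry loop re-running `docker container inspect` after a short delay. A simplified sketch of that pattern (hypothetical; the real backoff policy in retry.go differs) looks like:

```python
import time

def retry(fn, attempts=5, base_delay=0.25):
    """Call fn until it succeeds, sleeping a growing delay between failures."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # exhausted all attempts; surface the last error
            time.sleep(base_delay * (i + 1))
```

Each failed `docker container inspect` in the log corresponds to one such attempt, with the "will retry after …ms" message logging the computed delay.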
	I0604 16:27:22.763046    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:27:23.868293    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:27:23.868437    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.1051967s)
	W0604 16:27:23.868437    6672 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	
	W0604 16:27:23.868437    6672 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:23.879517    6672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:27:23.888000    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:27:24.956992    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:27:24.956992    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.0689795s)
	I0604 16:27:24.956992    6672 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:25.256798    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:27:26.356665    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:27:26.356665    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.0996834s)
	W0604 16:27:26.356922    6672 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	
	W0604 16:27:26.356922    6672 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:26.356922    6672 start.go:134] duration metric: createHost completed in 13.5626135s
	I0604 16:27:26.356922    6672 start.go:81] releasing machines lock for "false-20220604161400-5712", held for 13.5627062s
	W0604 16:27:26.356922    6672 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for false-20220604161400-5712 container: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/false-20220604161400-5712': mkdir /var/lib/docker/volumes/false-20220604161400-5712: read-only file system
	I0604 16:27:26.375482    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:27.504364    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:27.504364    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.1287989s)
	I0604 16:27:27.504364    6672 delete.go:82] Unable to get host status for false-20220604161400-5712, assuming it has already been deleted: state: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	W0604 16:27:27.504364    6672 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220604161400-5712 container: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/false-20220604161400-5712': mkdir /var/lib/docker/volumes/false-20220604161400-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220604161400-5712 container: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/false-20220604161400-5712': mkdir /var/lib/docker/volumes/false-20220604161400-5712: read-only file system
	
	I0604 16:27:27.504364    6672 start.go:614] Will try again in 5 seconds ...
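The root cause surfaced above is that the Docker daemon could not `mkdir` under `/var/lib/docker/volumes` because the filesystem was mounted read-only (EROFS). A minimal, generic way to probe whether a directory is writable before attempting such an operation (an illustrative sketch, not part of minikube) is to attempt a throwaway write and catch the OSError:

```python
import tempfile

def is_writable(path: str) -> bool:
    """Return True if a file can be created under path; a read-only
    mount (or missing directory) raises OSError and yields False."""
    try:
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:  # EROFS, ENOENT, EACCES, ...
        return False
```

A read-only `/var/lib/docker` as seen here would make `is_writable("/var/lib/docker/volumes")` return False inside the Docker VM, which explains why the subsequent retry recreates the container rather than succeeding.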
	I0604 16:27:32.513543    6672 start.go:352] acquiring machines lock for false-20220604161400-5712: {Name:mkbab35df702c1a2e8d7d50a3ec53c1f1cd4ed99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:27:32.513932    6672 start.go:356] acquired machines lock for "false-20220604161400-5712" in 207µs
	I0604 16:27:32.514107    6672 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:27:32.514197    6672 fix.go:55] fixHost starting: 
	I0604 16:27:32.532718    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:33.618985    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:33.618985    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0852448s)
	I0604 16:27:33.618985    6672 fix.go:103] recreateIfNeeded on false-20220604161400-5712: state= err=unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:33.618985    6672 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:27:33.629010    6672 out.go:177] * docker "false-20220604161400-5712" container is missing, will recreate.
	I0604 16:27:33.630974    6672 delete.go:124] DEMOLISHING false-20220604161400-5712 ...
	I0604 16:27:33.646969    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:34.743306    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:34.743446    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0961572s)
	W0604 16:27:34.743446    6672 stop.go:75] unable to get state: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:34.743446    6672 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:34.759621    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:35.837822    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:35.837822    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0781889s)
	I0604 16:27:35.837822    6672 delete.go:82] Unable to get host status for false-20220604161400-5712, assuming it has already been deleted: state: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:35.846382    6672 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220604161400-5712
	W0604 16:27:36.931627    6672 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220604161400-5712 returned with exit code 1
	I0604 16:27:36.931627    6672 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220604161400-5712: (1.085233s)
	I0604 16:27:36.931627    6672 kic.go:356] could not find the container false-20220604161400-5712 to remove it. will try anyways
	I0604 16:27:36.940620    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:38.014813    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:38.014813    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0741803s)
	W0604 16:27:38.014813    6672 oci.go:84] error getting container status, will try to delete anyways: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:38.021766    6672 cli_runner.go:164] Run: docker exec --privileged -t false-20220604161400-5712 /bin/bash -c "sudo init 0"
	W0604 16:27:39.107985    6672 cli_runner.go:211] docker exec --privileged -t false-20220604161400-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:27:39.107985    6672 cli_runner.go:217] Completed: docker exec --privileged -t false-20220604161400-5712 /bin/bash -c "sudo init 0": (1.0862063s)
	I0604 16:27:39.107985    6672 oci.go:625] error shutdown false-20220604161400-5712: docker exec --privileged -t false-20220604161400-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:40.117935    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:41.199451    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:41.199451    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.081384s)
	I0604 16:27:41.199451    6672 oci.go:637] temporary error verifying shutdown: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:41.199451    6672 oci.go:639] temporary error: container false-20220604161400-5712 status is  but expect it to be exited
	I0604 16:27:41.199451    6672 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:41.684456    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:42.757462    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:42.757462    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0729939s)
	I0604 16:27:42.757462    6672 oci.go:637] temporary error verifying shutdown: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:42.757462    6672 oci.go:639] temporary error: container false-20220604161400-5712 status is  but expect it to be exited
	I0604 16:27:42.757462    6672 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:43.669471    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:44.753422    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:44.753422    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0839382s)
	I0604 16:27:44.753422    6672 oci.go:637] temporary error verifying shutdown: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:44.753422    6672 oci.go:639] temporary error: container false-20220604161400-5712 status is  but expect it to be exited
	I0604 16:27:44.753422    6672 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:45.402188    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:46.507903    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:46.508045    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.105524s)
	I0604 16:27:46.508045    6672 oci.go:637] temporary error verifying shutdown: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:46.508045    6672 oci.go:639] temporary error: container false-20220604161400-5712 status is  but expect it to be exited
	I0604 16:27:46.508045    6672 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:47.627920    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:48.707787    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:48.707883    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0796467s)
	I0604 16:27:48.707989    6672 oci.go:637] temporary error verifying shutdown: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:48.708036    6672 oci.go:639] temporary error: container false-20220604161400-5712 status is  but expect it to be exited
	I0604 16:27:48.708128    6672 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:50.232070    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:51.310488    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:51.310488    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.0784065s)
	I0604 16:27:51.310488    6672 oci.go:637] temporary error verifying shutdown: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:51.310488    6672 oci.go:639] temporary error: container false-20220604161400-5712 status is  but expect it to be exited
	I0604 16:27:51.310488    6672 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:54.371896    6672 cli_runner.go:164] Run: docker container inspect false-20220604161400-5712 --format={{.State.Status}}
	W0604 16:27:55.475611    6672 cli_runner.go:211] docker container inspect false-20220604161400-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:55.475611    6672 cli_runner.go:217] Completed: docker container inspect false-20220604161400-5712 --format={{.State.Status}}: (1.1037027s)
	I0604 16:27:55.475611    6672 oci.go:637] temporary error verifying shutdown: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:27:55.475611    6672 oci.go:639] temporary error: container false-20220604161400-5712 status is  but expect it to be exited
	I0604 16:27:55.475908    6672 oci.go:88] couldn't shut down false-20220604161400-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-20220604161400-5712": docker container inspect false-20220604161400-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	 
	I0604 16:27:55.481911    6672 cli_runner.go:164] Run: docker rm -f -v false-20220604161400-5712
	I0604 16:27:56.549405    6672 cli_runner.go:217] Completed: docker rm -f -v false-20220604161400-5712: (1.067237s)
	I0604 16:27:56.559880    6672 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220604161400-5712
	W0604 16:27:57.592679    6672 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220604161400-5712 returned with exit code 1
	I0604 16:27:57.592759    6672 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220604161400-5712: (1.0327538s)
	I0604 16:27:57.599886    6672 cli_runner.go:164] Run: docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:27:58.670867    6672 cli_runner.go:211] docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:27:58.671001    6672 cli_runner.go:217] Completed: docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0709695s)
	I0604 16:27:58.678144    6672 network_create.go:272] running [docker network inspect false-20220604161400-5712] to gather additional debugging logs...
	I0604 16:27:58.679110    6672 cli_runner.go:164] Run: docker network inspect false-20220604161400-5712
	W0604 16:27:59.742917    6672 cli_runner.go:211] docker network inspect false-20220604161400-5712 returned with exit code 1
	I0604 16:27:59.742917    6672 cli_runner.go:217] Completed: docker network inspect false-20220604161400-5712: (1.0637124s)
	I0604 16:27:59.742917    6672 network_create.go:275] error running [docker network inspect false-20220604161400-5712]: docker network inspect false-20220604161400-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220604161400-5712
	I0604 16:27:59.742917    6672 network_create.go:277] output of [docker network inspect false-20220604161400-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220604161400-5712
	
	** /stderr **
	W0604 16:27:59.743885    6672 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:27:59.743885    6672 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:28:00.744130    6672 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:28:00.747087    6672 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:28:00.747721    6672 start.go:165] libmachine.API.Create for "false-20220604161400-5712" (driver="docker")
	I0604 16:28:00.747721    6672 client.go:168] LocalClient.Create starting
	I0604 16:28:00.747721    6672 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:28:00.748462    6672 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:00.748528    6672 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:00.748583    6672 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:28:00.748583    6672 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:00.748583    6672 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:00.758900    6672 cli_runner.go:164] Run: docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:28:01.856350    6672 cli_runner.go:211] docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:28:01.856350    6672 cli_runner.go:217] Completed: docker network inspect false-20220604161400-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0974381s)
	I0604 16:28:01.864593    6672 network_create.go:272] running [docker network inspect false-20220604161400-5712] to gather additional debugging logs...
	I0604 16:28:01.864593    6672 cli_runner.go:164] Run: docker network inspect false-20220604161400-5712
	W0604 16:28:02.950564    6672 cli_runner.go:211] docker network inspect false-20220604161400-5712 returned with exit code 1
	I0604 16:28:02.950564    6672 cli_runner.go:217] Completed: docker network inspect false-20220604161400-5712: (1.0857719s)
	I0604 16:28:02.950619    6672 network_create.go:275] error running [docker network inspect false-20220604161400-5712]: docker network inspect false-20220604161400-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220604161400-5712
	I0604 16:28:02.950659    6672 network_create.go:277] output of [docker network inspect false-20220604161400-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220604161400-5712
	
	** /stderr **
	I0604 16:28:02.958337    6672 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:28:04.052621    6672 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0942718s)
	I0604 16:28:04.070246    6672 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00052a0c0] amended:false}} dirty:map[] misses:0}
	I0604 16:28:04.070246    6672 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:28:04.085914    6672 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00052a0c0] amended:true}} dirty:map[192.168.49.0:0xc00052a0c0 192.168.58.0:0xc00014ea40] misses:0}
	I0604 16:28:04.086354    6672 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:28:04.086354    6672 network_create.go:115] attempt to create docker network false-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:28:04.094156    6672 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712
	W0604 16:28:05.174005    6672 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712 returned with exit code 1
	I0604 16:28:05.174005    6672 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: (1.0797819s)
	E0604 16:28:05.174005    6672 network_create.go:104] error while trying to create docker network false-20220604161400-5712 192.168.58.0/24: create docker network false-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4dbcffefe7dbb21bfc57c6768dd03652d12f6a6e933aed8e6c15b9aba693e828 (br-4dbcffefe7db): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:28:05.174005    6672 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4dbcffefe7dbb21bfc57c6768dd03652d12f6a6e933aed8e6c15b9aba693e828 (br-4dbcffefe7db): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220604161400-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220604161400-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4dbcffefe7dbb21bfc57c6768dd03652d12f6a6e933aed8e6c15b9aba693e828 (br-4dbcffefe7db): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:28:05.189999    6672 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:28:06.318323    6672 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1283116s)
	I0604 16:28:06.326361    6672 cli_runner.go:164] Run: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:28:07.445621    6672 cli_runner.go:211] docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:28:07.445703    6672 cli_runner.go:217] Completed: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: (1.1190313s)
	I0604 16:28:07.445785    6672 client.go:171] LocalClient.Create took 6.6979891s
	I0604 16:28:09.464888    6672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:28:09.475050    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:10.581507    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:10.581507    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.1064445s)
	I0604 16:28:10.581507    6672 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:10.933187    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:12.042816    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:12.042816    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.1091006s)
	W0604 16:28:12.042816    6672 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	
	W0604 16:28:12.042816    6672 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:12.053584    6672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:28:12.059479    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:13.176065    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:13.176065    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.1165738s)
	I0604 16:28:13.176065    6672 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:13.422086    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:14.525792    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:14.525938    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.1036714s)
	W0604 16:28:14.525938    6672 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	
	W0604 16:28:14.525938    6672 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:14.525938    6672 start.go:134] duration metric: createHost completed in 13.7815037s
	I0604 16:28:14.536836    6672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:28:14.544633    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:15.625107    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:15.625107    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.0804622s)
	I0604 16:28:15.625107    6672 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:15.884866    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:16.924664    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:16.924664    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.0397863s)
	W0604 16:28:16.925117    6672 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	
	W0604 16:28:16.925214    6672 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:16.939312    6672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:28:16.945303    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:17.965475    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:17.965668    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.0201609s)
	I0604 16:28:17.965743    6672 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:18.179630    6672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712
	W0604 16:28:19.289386    6672 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712 returned with exit code 1
	I0604 16:28:19.289386    6672 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: (1.1097436s)
	W0604 16:28:19.289386    6672 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	
	W0604 16:28:19.289386    6672 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220604161400-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220604161400-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220604161400-5712
	I0604 16:28:19.289386    6672 fix.go:57] fixHost completed within 46.7746702s
	I0604 16:28:19.289386    6672 start.go:81] releasing machines lock for "false-20220604161400-5712", held for 46.7749347s
	W0604 16:28:19.289386    6672 out.go:239] * Failed to start docker container. Running "minikube delete -p false-20220604161400-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220604161400-5712 container: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/false-20220604161400-5712': mkdir /var/lib/docker/volumes/false-20220604161400-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p false-20220604161400-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220604161400-5712 container: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/false-20220604161400-5712': mkdir /var/lib/docker/volumes/false-20220604161400-5712: read-only file system
	
	I0604 16:28:19.295380    6672 out.go:177] 
	W0604 16:28:19.297389    6672 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220604161400-5712 container: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/false-20220604161400-5712': mkdir /var/lib/docker/volumes/false-20220604161400-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220604161400-5712 container: docker volume create false-20220604161400-5712 --label name.minikube.sigs.k8s.io=false-20220604161400-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220604161400-5712: error while creating volume root path '/var/lib/docker/volumes/false-20220604161400-5712': mkdir /var/lib/docker/volumes/false-20220604161400-5712: read-only file system
	
	W0604 16:28:19.297389    6672 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:28:19.297389    6672 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:28:19.300389    6672 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/false/Start (77.46s)

TestNetworkPlugins/group/bridge/Start (77.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: exit status 60 (1m17.1316668s)

-- stdout --
	* [bridge-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node bridge-20220604161352-5712 in cluster bridge-20220604161352-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-20220604161352-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:27:11.910306     728 out.go:296] Setting OutFile to fd 1556 ...
	I0604 16:27:11.972961     728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:11.972961     728 out.go:309] Setting ErrFile to fd 1600...
	I0604 16:27:11.973011     728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:11.986560     728 out.go:303] Setting JSON to false
	I0604 16:27:11.988500     728 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11104,"bootTime":1654348927,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:27:11.989503     728 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:27:11.995043     728 out.go:177] * [bridge-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:27:11.997786     728 notify.go:193] Checking for updates...
	I0604 16:27:12.001639     728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:27:12.003827     728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:27:12.007195     728 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:27:12.009470     728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:27:12.012347     728 config.go:178] Loaded profile config "calico-20220604161407-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:12.013479     728 config.go:178] Loaded profile config "false-20220604161400-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:12.013479     728 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:12.014630     728 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:12.014630     728 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:27:14.673932     728 docker.go:137] docker version: linux-20.10.16
	I0604 16:27:14.682967     728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:27:16.766835     728 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0838449s)
	I0604 16:27:16.767587     728 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:27:15.7284412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:27:16.771431     728 out.go:177] * Using the docker driver based on user configuration
	I0604 16:27:16.774746     728 start.go:284] selected driver: docker
	I0604 16:27:16.774746     728 start.go:806] validating driver "docker" against <nil>
	I0604 16:27:16.774746     728 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:27:16.842228     728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:27:18.873629     728 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0313786s)
	I0604 16:27:18.873629     728 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:27:17.867762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:27:18.873629     728 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:27:18.874627     728 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:27:18.880621     728 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:27:18.882635     728 cni.go:95] Creating CNI manager for "bridge"
	I0604 16:27:18.882635     728 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0604 16:27:18.882635     728 start_flags.go:306] config:
	{Name:bridge-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220604161352-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:27:18.885620     728 out.go:177] * Starting control plane node bridge-20220604161352-5712 in cluster bridge-20220604161352-5712
	I0604 16:27:18.889638     728 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:27:18.891621     728 out.go:177] * Pulling base image ...
	I0604 16:27:18.894623     728 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:27:18.894623     728 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:27:18.894623     728 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:27:18.894623     728 cache.go:57] Caching tarball of preloaded images
	I0604 16:27:18.895621     728 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:27:18.895621     728 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:27:18.895621     728 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220604161352-5712\config.json ...
	I0604 16:27:18.895621     728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220604161352-5712\config.json: {Name:mkf589cc206fe58eb31e88ff85a7d5c31ab5ca91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:27:19.972806     728 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:27:19.972806     728 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:19.973405     728 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:19.973405     728 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:27:19.973405     728 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:27:19.973405     728 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:27:19.973405     728 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:27:19.973405     728 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:27:19.973405     728 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:22.333470     728 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:27:22.333470     728 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:27:22.333470     728 start.go:352] acquiring machines lock for bridge-20220604161352-5712: {Name:mk5d15b817c3b8b59017aade337983c05636d7c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:27:22.334202     728 start.go:356] acquired machines lock for "bridge-20220604161352-5712" in 679.2µs
	I0604 16:27:22.334307     728 start.go:91] Provisioning new machine with config: &{Name:bridge-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220604161352-5712 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:27:22.334307     728 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:27:22.337744     728 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:27:22.338393     728 start.go:165] libmachine.API.Create for "bridge-20220604161352-5712" (driver="docker")
	I0604 16:27:22.338425     728 client.go:168] LocalClient.Create starting
	I0604 16:27:22.338930     728 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:27:22.339477     728 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:22.339529     728 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:22.339785     728 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:27:22.339907     728 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:22.339907     728 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:22.349618     728 cli_runner.go:164] Run: docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:27:23.440676     728 cli_runner.go:211] docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:27:23.440676     728 cli_runner.go:217] Completed: docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0910467s)
	I0604 16:27:23.446681     728 network_create.go:272] running [docker network inspect bridge-20220604161352-5712] to gather additional debugging logs...
	I0604 16:27:23.447687     728 cli_runner.go:164] Run: docker network inspect bridge-20220604161352-5712
	W0604 16:27:24.512407     728 cli_runner.go:211] docker network inspect bridge-20220604161352-5712 returned with exit code 1
	I0604 16:27:24.512407     728 cli_runner.go:217] Completed: docker network inspect bridge-20220604161352-5712: (1.0647083s)
	I0604 16:27:24.512407     728 network_create.go:275] error running [docker network inspect bridge-20220604161352-5712]: docker network inspect bridge-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220604161352-5712
	I0604 16:27:24.512407     728 network_create.go:277] output of [docker network inspect bridge-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220604161352-5712
	
	** /stderr **
	I0604 16:27:24.519384     728 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:27:25.619039     728 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0996429s)
	I0604 16:27:25.641444     728 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00070aac0] misses:0}
	I0604 16:27:25.641444     728 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:27:25.641444     728 network_create.go:115] attempt to create docker network bridge-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:27:25.648096     728 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712
	W0604 16:27:26.751706     728 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712 returned with exit code 1
	I0604 16:27:26.751706     728 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: (1.1035974s)
	E0604 16:27:26.751706     728 network_create.go:104] error while trying to create docker network bridge-20220604161352-5712 192.168.49.0/24: create docker network bridge-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 689028c232b4f2ec78c40b560b3953531de4aadd673d63be9c0b4a2cd2be1d7d (br-689028c232b4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:27:26.751706     728 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 689028c232b4f2ec78c40b560b3953531de4aadd673d63be9c0b4a2cd2be1d7d (br-689028c232b4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 689028c232b4f2ec78c40b560b3953531de4aadd673d63be9c0b4a2cd2be1d7d (br-689028c232b4): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:27:26.774513     728 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:27:27.880161     728 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1056352s)
	I0604 16:27:27.894086     728 cli_runner.go:164] Run: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:27:28.950296     728 cli_runner.go:211] docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:27:28.950296     728 cli_runner.go:217] Completed: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0561404s)
	I0604 16:27:28.950296     728 client.go:171] LocalClient.Create took 6.6117973s
	I0604 16:27:30.977684     728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:27:30.983288     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:27:32.021099     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:27:32.021099     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0365145s)
	I0604 16:27:32.021099     728 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:32.318447     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:27:33.403476     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:27:33.403644     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0850173s)
	W0604 16:27:33.403644     728 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	
	W0604 16:27:33.403644     728 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:33.415335     728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:27:33.422340     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:27:34.490393     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:27:34.490393     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0678957s)
	I0604 16:27:34.490393     728 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:34.796471     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:27:35.852922     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:27:35.852922     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0564394s)
	W0604 16:27:35.852922     728 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	
	W0604 16:27:35.852922     728 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:35.852922     728 start.go:134] duration metric: createHost completed in 13.5184641s
	I0604 16:27:35.852922     728 start.go:81] releasing machines lock for "bridge-20220604161352-5712", held for 13.5185351s
	W0604 16:27:35.852922     728 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for bridge-20220604161352-5712 container: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/bridge-20220604161352-5712': mkdir /var/lib/docker/volumes/bridge-20220604161352-5712: read-only file system
	I0604 16:27:35.866918     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:36.915686     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:36.915686     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.0487562s)
	I0604 16:27:36.915686     728 delete.go:82] Unable to get host status for bridge-20220604161352-5712, assuming it has already been deleted: state: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	W0604 16:27:36.915686     728 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220604161352-5712 container: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/bridge-20220604161352-5712': mkdir /var/lib/docker/volumes/bridge-20220604161352-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220604161352-5712 container: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/bridge-20220604161352-5712': mkdir /var/lib/docker/volumes/bridge-20220604161352-5712: read-only file system
	
	I0604 16:27:36.915686     728 start.go:614] Will try again in 5 seconds ...
	I0604 16:27:41.918056     728 start.go:352] acquiring machines lock for bridge-20220604161352-5712: {Name:mk5d15b817c3b8b59017aade337983c05636d7c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:27:41.918056     728 start.go:356] acquired machines lock for "bridge-20220604161352-5712" in 0s
	I0604 16:27:41.918713     728 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:27:41.918823     728 fix.go:55] fixHost starting: 
	I0604 16:27:41.933704     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:43.013269     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:43.013269     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.0795528s)
	I0604 16:27:43.013269     728 fix.go:103] recreateIfNeeded on bridge-20220604161352-5712: state= err=unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:43.013269     728 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:27:43.019257     728 out.go:177] * docker "bridge-20220604161352-5712" container is missing, will recreate.
	I0604 16:27:43.022012     728 delete.go:124] DEMOLISHING bridge-20220604161352-5712 ...
	I0604 16:27:43.060229     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:44.169000     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:44.169000     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.1087134s)
	W0604 16:27:44.169000     728 stop.go:75] unable to get state: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:44.169000     728 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:44.186855     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:45.285462     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:45.285462     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.0985942s)
	I0604 16:27:45.285462     728 delete.go:82] Unable to get host status for bridge-20220604161352-5712, assuming it has already been deleted: state: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:45.294114     728 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220604161352-5712
	W0604 16:27:46.385126     728 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220604161352-5712 returned with exit code 1
	I0604 16:27:46.385126     728 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} bridge-20220604161352-5712: (1.0909997s)
	I0604 16:27:46.385126     728 kic.go:356] could not find the container bridge-20220604161352-5712 to remove it. will try anyways
	I0604 16:27:46.393395     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:47.457938     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:47.457938     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.064425s)
	W0604 16:27:47.458120     728 oci.go:84] error getting container status, will try to delete anyways: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:47.465938     728 cli_runner.go:164] Run: docker exec --privileged -t bridge-20220604161352-5712 /bin/bash -c "sudo init 0"
	W0604 16:27:48.548253     728 cli_runner.go:211] docker exec --privileged -t bridge-20220604161352-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:27:48.548253     728 cli_runner.go:217] Completed: docker exec --privileged -t bridge-20220604161352-5712 /bin/bash -c "sudo init 0": (1.0823032s)
	I0604 16:27:48.548253     728 oci.go:625] error shutdown bridge-20220604161352-5712: docker exec --privileged -t bridge-20220604161352-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:49.557958     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:50.621544     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:50.621544     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.0634465s)
	I0604 16:27:50.621544     728 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:50.621544     728 oci.go:639] temporary error: container bridge-20220604161352-5712 status is  but expect it to be exited
	I0604 16:27:50.621544     728 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:51.100590     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:52.210096     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:52.210137     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.1093757s)
	I0604 16:27:52.210423     728 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:52.210535     728 oci.go:639] temporary error: container bridge-20220604161352-5712 status is  but expect it to be exited
	I0604 16:27:52.210582     728 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:53.116936     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:54.205171     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:54.205222     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.0881594s)
	I0604 16:27:54.205222     728 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:54.205222     728 oci.go:639] temporary error: container bridge-20220604161352-5712 status is  but expect it to be exited
	I0604 16:27:54.205222     728 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:54.857959     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:55.976250     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:55.976250     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.1180629s)
	I0604 16:27:55.976250     728 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:55.976250     728 oci.go:639] temporary error: container bridge-20220604161352-5712 status is  but expect it to be exited
	I0604 16:27:55.976250     728 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:57.098638     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:58.169498     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:58.169498     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.0708483s)
	I0604 16:27:58.169498     728 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:58.169498     728 oci.go:639] temporary error: container bridge-20220604161352-5712 status is  but expect it to be exited
	I0604 16:27:58.169498     728 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:27:59.694618     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:00.723655     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:00.723655     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.0289606s)
	I0604 16:28:00.723655     728 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:00.723655     728 oci.go:639] temporary error: container bridge-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:00.723655     728 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:03.779372     728 cli_runner.go:164] Run: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:04.894396     728 cli_runner.go:211] docker container inspect bridge-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:04.894396     728 cli_runner.go:217] Completed: docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: (1.1150125s)
	I0604 16:28:04.894396     728 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:04.894396     728 oci.go:639] temporary error: container bridge-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:04.894396     728 oci.go:88] couldn't shut down bridge-20220604161352-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-20220604161352-5712": docker container inspect bridge-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	 
	I0604 16:28:04.901388     728 cli_runner.go:164] Run: docker rm -f -v bridge-20220604161352-5712
	I0604 16:28:05.989963     728 cli_runner.go:217] Completed: docker rm -f -v bridge-20220604161352-5712: (1.0883505s)
	I0604 16:28:06.003626     728 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220604161352-5712
	W0604 16:28:07.146888     728 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:07.146888     728 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} bridge-20220604161352-5712: (1.1432494s)
	I0604 16:28:07.153889     728 cli_runner.go:164] Run: docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:28:08.212659     728 cli_runner.go:211] docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:28:08.212659     728 cli_runner.go:217] Completed: docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0586082s)
	I0604 16:28:08.220655     728 network_create.go:272] running [docker network inspect bridge-20220604161352-5712] to gather additional debugging logs...
	I0604 16:28:08.220655     728 cli_runner.go:164] Run: docker network inspect bridge-20220604161352-5712
	W0604 16:28:09.283983     728 cli_runner.go:211] docker network inspect bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:09.283983     728 cli_runner.go:217] Completed: docker network inspect bridge-20220604161352-5712: (1.063049s)
	I0604 16:28:09.283983     728 network_create.go:275] error running [docker network inspect bridge-20220604161352-5712]: docker network inspect bridge-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220604161352-5712
	I0604 16:28:09.283983     728 network_create.go:277] output of [docker network inspect bridge-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220604161352-5712
	
	** /stderr **
	W0604 16:28:09.285228     728 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:28:09.285298     728 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:28:10.285766     728 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:28:10.289904     728 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:28:10.290112     728 start.go:165] libmachine.API.Create for "bridge-20220604161352-5712" (driver="docker")
	I0604 16:28:10.290112     728 client.go:168] LocalClient.Create starting
	I0604 16:28:10.290733     728 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:28:10.291010     728 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:10.291010     728 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:10.291010     728 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:28:10.291010     728 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:10.291010     728 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:10.300956     728 cli_runner.go:164] Run: docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:28:11.435339     728 cli_runner.go:211] docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:28:11.435339     728 cli_runner.go:217] Completed: docker network inspect bridge-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.13437s)
	I0604 16:28:11.443005     728 network_create.go:272] running [docker network inspect bridge-20220604161352-5712] to gather additional debugging logs...
	I0604 16:28:11.443005     728 cli_runner.go:164] Run: docker network inspect bridge-20220604161352-5712
	W0604 16:28:12.516523     728 cli_runner.go:211] docker network inspect bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:12.516563     728 cli_runner.go:217] Completed: docker network inspect bridge-20220604161352-5712: (1.0732988s)
	I0604 16:28:12.516640     728 network_create.go:275] error running [docker network inspect bridge-20220604161352-5712]: docker network inspect bridge-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220604161352-5712
	I0604 16:28:12.516640     728 network_create.go:277] output of [docker network inspect bridge-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220604161352-5712
	
	** /stderr **
	I0604 16:28:12.527878     728 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:28:13.627973     728 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.099842s)
	I0604 16:28:13.648552     728 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00070aac0] amended:false}} dirty:map[] misses:0}
	I0604 16:28:13.648552     728 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:28:13.664972     728 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00070aac0] amended:true}} dirty:map[192.168.49.0:0xc00070aac0 192.168.58.0:0xc0007884f0] misses:0}
	I0604 16:28:13.664972     728 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:28:13.664972     728 network_create.go:115] attempt to create docker network bridge-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:28:13.672278     728 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712
	W0604 16:28:14.773814     728 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:14.773814     728 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: (1.1015236s)
	E0604 16:28:14.773814     728 network_create.go:104] error while trying to create docker network bridge-20220604161352-5712 192.168.58.0/24: create docker network bridge-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c5d43f51475df56d3412320f63620b07404bb947798f9e52990b019d50123d32 (br-c5d43f51475d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:28:14.773814     728 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c5d43f51475df56d3412320f63620b07404bb947798f9e52990b019d50123d32 (br-c5d43f51475d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c5d43f51475df56d3412320f63620b07404bb947798f9e52990b019d50123d32 (br-c5d43f51475d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:28:14.788556     728 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:28:15.861885     728 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0733172s)
	I0604 16:28:15.867861     728 cli_runner.go:164] Run: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:28:16.940298     728 cli_runner.go:211] docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:28:16.940298     728 cli_runner.go:217] Completed: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0714257s)
	I0604 16:28:16.940298     728 client.go:171] LocalClient.Create took 6.6501121s
	I0604 16:28:18.960097     728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:28:18.966520     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:20.062419     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:20.062489     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0957677s)
	I0604 16:28:20.062489     728 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:20.410561     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:21.472031     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:21.472031     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0614263s)
	W0604 16:28:21.472503     728 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	
	W0604 16:28:21.472567     728 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:21.485549     728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:28:21.492334     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:22.546788     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:22.546788     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0544428s)
	I0604 16:28:22.546788     728 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:22.788652     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:23.880984     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:23.881231     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0923197s)
	W0604 16:28:23.881285     728 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	
	W0604 16:28:23.881285     728 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:23.881285     728 start.go:134] duration metric: createHost completed in 13.5951939s
	I0604 16:28:23.891843     728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:28:23.898702     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:25.015410     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:25.015410     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.1166954s)
	I0604 16:28:25.015410     728 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:25.279677     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:26.366302     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:26.366302     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0866129s)
	W0604 16:28:26.366302     728 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	
	W0604 16:28:26.366302     728 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:26.376290     728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:28:26.383300     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:27.461070     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:27.461070     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0777578s)
	I0604 16:28:27.461070     728 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:27.675308     728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712
	W0604 16:28:28.770237     728 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712 returned with exit code 1
	I0604 16:28:28.770237     728 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: (1.0949164s)
	W0604 16:28:28.770237     728 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	
	W0604 16:28:28.770237     728 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220604161352-5712
	I0604 16:28:28.770237     728 fix.go:57] fixHost completed within 46.8508938s
	I0604 16:28:28.770237     728 start.go:81] releasing machines lock for "bridge-20220604161352-5712", held for 46.8516603s
	W0604 16:28:28.770237     728 out.go:239] * Failed to start docker container. Running "minikube delete -p bridge-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220604161352-5712 container: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/bridge-20220604161352-5712': mkdir /var/lib/docker/volumes/bridge-20220604161352-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p bridge-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220604161352-5712 container: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/bridge-20220604161352-5712': mkdir /var/lib/docker/volumes/bridge-20220604161352-5712: read-only file system
	
	I0604 16:28:28.775228     728 out.go:177] 
	W0604 16:28:28.777225     728 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220604161352-5712 container: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/bridge-20220604161352-5712': mkdir /var/lib/docker/volumes/bridge-20220604161352-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220604161352-5712 container: docker volume create bridge-20220604161352-5712 --label name.minikube.sigs.k8s.io=bridge-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/bridge-20220604161352-5712': mkdir /var/lib/docker/volumes/bridge-20220604161352-5712: read-only file system
	
	W0604 16:28:28.777225     728 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:28:28.777225     728 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:28:28.780231     728 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/bridge/Start (77.25s)

TestNetworkPlugins/group/enable-default-cni/Start (77.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: exit status 60 (1m16.9972908s)

-- stdout --
	* [enable-default-cni-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node enable-default-cni-20220604161352-5712 in cluster enable-default-cni-20220604161352-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-20220604161352-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I0604 16:27:30.259609    1348 out.go:296] Setting OutFile to fd 1908 ...
	I0604 16:27:30.315501    1348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:30.315501    1348 out.go:309] Setting ErrFile to fd 1548...
	I0604 16:27:30.315501    1348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:30.330026    1348 out.go:303] Setting JSON to false
	I0604 16:27:30.331830    1348 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11122,"bootTime":1654348928,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:27:30.331830    1348 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:27:30.336977    1348 out.go:177] * [enable-default-cni-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:27:30.340468    1348 notify.go:193] Checking for updates...
	I0604 16:27:30.342457    1348 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:27:30.344832    1348 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:27:30.347260    1348 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:27:30.349583    1348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:27:30.352500    1348 config.go:178] Loaded profile config "bridge-20220604161352-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:30.353236    1348 config.go:178] Loaded profile config "false-20220604161400-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:30.353292    1348 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:30.353843    1348 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:30.353940    1348 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:27:32.983535    1348 docker.go:137] docker version: linux-20.10.16
	I0604 16:27:32.990839    1348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:27:35.048923    1348 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0580613s)
	I0604 16:27:35.049804    1348 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:27:34.0208392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:27:35.052904    1348 out.go:177] * Using the docker driver based on user configuration
	I0604 16:27:35.056171    1348 start.go:284] selected driver: docker
	I0604 16:27:35.056171    1348 start.go:806] validating driver "docker" against <nil>
	I0604 16:27:35.056171    1348 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:27:35.122160    1348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:27:37.208819    1348 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0865687s)
	I0604 16:27:37.208963    1348 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:27:36.1732185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:27:37.208963    1348 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	E0604 16:27:37.209968    1348 start_flags.go:444] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0604 16:27:37.210043    1348 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:27:37.212959    1348 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:27:37.217013    1348 cni.go:95] Creating CNI manager for "bridge"
	I0604 16:27:37.217013    1348 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0604 16:27:37.217013    1348 start_flags.go:306] config:
	{Name:enable-default-cni-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:enable-default-cni-20220604161352-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:27:37.221663    1348 out.go:177] * Starting control plane node enable-default-cni-20220604161352-5712 in cluster enable-default-cni-20220604161352-5712
	I0604 16:27:37.223773    1348 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:27:37.225259    1348 out.go:177] * Pulling base image ...
	I0604 16:27:37.228315    1348 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:27:37.228315    1348 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:27:37.228315    1348 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:27:37.228315    1348 cache.go:57] Caching tarball of preloaded images
	I0604 16:27:37.229427    1348 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:27:37.229427    1348 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:27:37.230138    1348 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220604161352-5712\config.json ...
	I0604 16:27:37.230338    1348 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220604161352-5712\config.json: {Name:mk39da260d1ba0f30027f49f3aa3323421533b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:27:38.309386    1348 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:27:38.309386    1348 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:38.309386    1348 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:38.309386    1348 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:27:38.309386    1348 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:27:38.309386    1348 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:27:38.309386    1348 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:27:38.309386    1348 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:27:38.309386    1348 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:27:40.689984    1348 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:27:40.689984    1348 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:27:40.690500    1348 start.go:352] acquiring machines lock for enable-default-cni-20220604161352-5712: {Name:mk2c1fab6b7ebc78016b71e88363467144786b4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:27:40.690741    1348 start.go:356] acquired machines lock for "enable-default-cni-20220604161352-5712" in 91µs
	I0604 16:27:40.690741    1348 start.go:91] Provisioning new machine with config: &{Name:enable-default-cni-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:enable-default-cni-20220604161352
-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:27:40.691274    1348 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:27:40.696486    1348 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:27:40.696486    1348 start.go:165] libmachine.API.Create for "enable-default-cni-20220604161352-5712" (driver="docker")
	I0604 16:27:40.697024    1348 client.go:168] LocalClient.Create starting
	I0604 16:27:40.697266    1348 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:27:40.698031    1348 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:40.698031    1348 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:40.698031    1348 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:27:40.698031    1348 main.go:134] libmachine: Decoding PEM data...
	I0604 16:27:40.698031    1348 main.go:134] libmachine: Parsing certificate...
	I0604 16:27:40.707809    1348 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:27:41.770266    1348 cli_runner.go:211] docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:27:41.770380    1348 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0623387s)
	I0604 16:27:41.778116    1348 network_create.go:272] running [docker network inspect enable-default-cni-20220604161352-5712] to gather additional debugging logs...
	I0604 16:27:41.778116    1348 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220604161352-5712
	W0604 16:27:42.859527    1348 cli_runner.go:211] docker network inspect enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:27:42.859527    1348 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220604161352-5712: (1.0813989s)
	I0604 16:27:42.859527    1348 network_create.go:275] error running [docker network inspect enable-default-cni-20220604161352-5712]: docker network inspect enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220604161352-5712
	I0604 16:27:42.859527    1348 network_create.go:277] output of [docker network inspect enable-default-cni-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220604161352-5712
	
	** /stderr **
	I0604 16:27:42.866465    1348 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:27:43.977629    1348 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1111516s)
	I0604 16:27:44.000824    1348 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00070e250] misses:0}
	I0604 16:27:44.001497    1348 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:27:44.001497    1348 network_create.go:115] attempt to create docker network enable-default-cni-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:27:44.015235    1348 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712
	W0604 16:27:45.095622    1348 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:27:45.095622    1348 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: (1.0803749s)
	E0604 16:27:45.095622    1348 network_create.go:104] error while trying to create docker network enable-default-cni-20220604161352-5712 192.168.49.0/24: create docker network enable-default-cni-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fe553ca9baa3981f66ba1280024c37d45de7912c9ddd297361edd33912dd11ed (br-fe553ca9baa3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:27:45.095622    1348 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fe553ca9baa3981f66ba1280024c37d45de7912c9ddd297361edd33912dd11ed (br-fe553ca9baa3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fe553ca9baa3981f66ba1280024c37d45de7912c9ddd297361edd33912dd11ed (br-fe553ca9baa3): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:27:45.109628    1348 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:27:46.170296    1348 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0605545s)
	I0604 16:27:46.178671    1348 cli_runner.go:164] Run: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:27:47.255252    1348 cli_runner.go:211] docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:27:47.255252    1348 cli_runner.go:217] Completed: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0764309s)
	I0604 16:27:47.255252    1348 client.go:171] LocalClient.Create took 6.5579139s
	I0604 16:27:49.279677    1348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:27:49.286066    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:27:50.347934    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:27:50.347934    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0616611s)
	I0604 16:27:50.347934    1348 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:27:50.645131    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:27:51.722810    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:27:51.722810    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0776678s)
	W0604 16:27:51.722810    1348 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	
	W0604 16:27:51.722810    1348 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:27:51.733850    1348 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:27:51.740911    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:27:52.851707    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:27:52.851791    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.1106129s)
	I0604 16:27:52.852031    1348 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:27:53.161297    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:27:54.220731    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:27:54.220731    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0592502s)
	W0604 16:27:54.220896    1348 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	
	W0604 16:27:54.220977    1348 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:27:54.220977    1348 start.go:134] duration metric: createHost completed in 13.529553s
	I0604 16:27:54.220977    1348 start.go:81] releasing machines lock for "enable-default-cni-20220604161352-5712", held for 13.5300865s
	W0604 16:27:54.221238    1348 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220604161352-5712 container: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220604161352-5712': mkdir /var/lib/docker/volumes/enable-default-cni-20220604161352-5712: read-only file system
	I0604 16:27:54.237016    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:27:55.304614    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:55.304614    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0675855s)
	I0604 16:27:55.304614    1348 delete.go:82] Unable to get host status for enable-default-cni-20220604161352-5712, assuming it has already been deleted: state: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	W0604 16:27:55.304614    1348 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220604161352-5712 container: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220604161352-5712': mkdir /var/lib/docker/volumes/enable-default-cni-20220604161352-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220604161352-5712 container: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220604161352-5712': mkdir /var/lib/docker/volumes/enable-default-cni-20220604161352-5712: read-only file system
	
	I0604 16:27:55.304614    1348 start.go:614] Will try again in 5 seconds ...
	I0604 16:28:00.314391    1348 start.go:352] acquiring machines lock for enable-default-cni-20220604161352-5712: {Name:mk2c1fab6b7ebc78016b71e88363467144786b4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:28:00.314391    1348 start.go:356] acquired machines lock for "enable-default-cni-20220604161352-5712" in 0s
	I0604 16:28:00.314391    1348 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:28:00.314391    1348 fix.go:55] fixHost starting: 
	I0604 16:28:00.333998    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:01.401939    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:01.401939    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.066921s)
	I0604 16:28:01.401939    1348 fix.go:103] recreateIfNeeded on enable-default-cni-20220604161352-5712: state= err=unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:01.401939    1348 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:28:01.405997    1348 out.go:177] * docker "enable-default-cni-20220604161352-5712" container is missing, will recreate.
	I0604 16:28:01.407898    1348 delete.go:124] DEMOLISHING enable-default-cni-20220604161352-5712 ...
	I0604 16:28:01.421925    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:02.503108    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:02.503108    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0811708s)
	W0604 16:28:02.503108    1348 stop.go:75] unable to get state: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:02.503108    1348 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:02.517080    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:03.595912    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:03.595912    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0788201s)
	I0604 16:28:03.595912    1348 delete.go:82] Unable to get host status for enable-default-cni-20220604161352-5712, assuming it has already been deleted: state: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:03.602904    1348 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220604161352-5712
	W0604 16:28:04.709334    1348 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:04.709334    1348 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220604161352-5712: (1.1064177s)
	I0604 16:28:04.709334    1348 kic.go:356] could not find the container enable-default-cni-20220604161352-5712 to remove it. will try anyways
	I0604 16:28:04.716311    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:05.820288    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:05.820629    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.1039647s)
	W0604 16:28:05.820674    1348 oci.go:84] error getting container status, will try to delete anyways: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:05.828348    1348 cli_runner.go:164] Run: docker exec --privileged -t enable-default-cni-20220604161352-5712 /bin/bash -c "sudo init 0"
	W0604 16:28:06.943409    1348 cli_runner.go:211] docker exec --privileged -t enable-default-cni-20220604161352-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:28:06.943499    1348 cli_runner.go:217] Completed: docker exec --privileged -t enable-default-cni-20220604161352-5712 /bin/bash -c "sudo init 0": (1.1150485s)
	I0604 16:28:06.943531    1348 oci.go:625] error shutdown enable-default-cni-20220604161352-5712: docker exec --privileged -t enable-default-cni-20220604161352-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:07.955692    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:09.035126    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:09.035126    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0794225s)
	I0604 16:28:09.035126    1348 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:09.035126    1348 oci.go:639] temporary error: container enable-default-cni-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:09.035126    1348 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:09.509469    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:10.596922    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:10.597146    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0872955s)
	I0604 16:28:10.597200    1348 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:10.597200    1348 oci.go:639] temporary error: container enable-default-cni-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:10.597200    1348 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:11.507042    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:12.593660    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:12.593660    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0865607s)
	I0604 16:28:12.593660    1348 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:12.593660    1348 oci.go:639] temporary error: container enable-default-cni-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:12.593660    1348 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:13.248040    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:14.337753    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:14.337753    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0891826s)
	I0604 16:28:14.337753    1348 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:14.337753    1348 oci.go:639] temporary error: container enable-default-cni-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:14.337753    1348 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:15.460745    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:16.561554    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:16.561554    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.1006501s)
	I0604 16:28:16.561554    1348 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:16.561554    1348 oci.go:639] temporary error: container enable-default-cni-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:16.561554    1348 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:18.085736    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:19.178406    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:19.178406    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0926571s)
	I0604 16:28:19.178406    1348 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:19.178406    1348 oci.go:639] temporary error: container enable-default-cni-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:19.178406    1348 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:22.242261    1348 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:23.330027    1348 cli_runner.go:211] docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:23.330027    1348 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: (1.0877538s)
	I0604 16:28:23.330027    1348 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:23.330027    1348 oci.go:639] temporary error: container enable-default-cni-20220604161352-5712 status is  but expect it to be exited
	I0604 16:28:23.330027    1348 oci.go:88] couldn't shut down enable-default-cni-20220604161352-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220604161352-5712": docker container inspect enable-default-cni-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	 
	I0604 16:28:23.337029    1348 cli_runner.go:164] Run: docker rm -f -v enable-default-cni-20220604161352-5712
	I0604 16:28:24.450826    1348 cli_runner.go:217] Completed: docker rm -f -v enable-default-cni-20220604161352-5712: (1.1137851s)
	I0604 16:28:24.457841    1348 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220604161352-5712
	W0604 16:28:25.520327    1348 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:25.520327    1348 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220604161352-5712: (1.0623259s)
	I0604 16:28:25.528724    1348 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:28:26.619682    1348 cli_runner.go:211] docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:28:26.619682    1348 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0909459s)
	I0604 16:28:26.626682    1348 network_create.go:272] running [docker network inspect enable-default-cni-20220604161352-5712] to gather additional debugging logs...
	I0604 16:28:26.626682    1348 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220604161352-5712
	W0604 16:28:27.711749    1348 cli_runner.go:211] docker network inspect enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:27.711749    1348 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220604161352-5712: (1.0850555s)
	I0604 16:28:27.711749    1348 network_create.go:275] error running [docker network inspect enable-default-cni-20220604161352-5712]: docker network inspect enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220604161352-5712
	I0604 16:28:27.711749    1348 network_create.go:277] output of [docker network inspect enable-default-cni-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220604161352-5712
	
	** /stderr **
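The long `--format "{"Name": "{{.Name}}",…}"` argument above is a Go `text/template` string (the quoting is mangled by the Windows shell in the log). Docker evaluates it against the inspected object; a self-contained sketch of that evaluation, using a stand-in struct with only two of the many fields the real `docker network inspect` object exposes:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// network is a stand-in for the object docker exposes to --format;
// the real inspect payload has many more fields (IPAM, Options, Containers, ...).
type network struct {
	Name   string
	Driver string
}

// renderFormat evaluates a Go text/template the way docker's --format flag does.
func renderFormat(format string, data interface{}) (string, error) {
	t, err := template.New("format").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := renderFormat(`{"Name": "{{.Name}}","Driver": "{{.Driver}}"}`,
		network{Name: "bridge", Driver: "bridge"})
	fmt.Println(out)
}
```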
	W0604 16:28:27.712754    1348 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:28:27.712754    1348 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:28:28.722230    1348 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:28:28.727261    1348 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:28:28.727261    1348 start.go:165] libmachine.API.Create for "enable-default-cni-20220604161352-5712" (driver="docker")
	I0604 16:28:28.727261    1348 client.go:168] LocalClient.Create starting
	I0604 16:28:28.728232    1348 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:28:28.728232    1348 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:28.728232    1348 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:28.728232    1348 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:28:28.729235    1348 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:28.729235    1348 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:28.737230    1348 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:28:29.825200    1348 cli_runner.go:211] docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:28:29.825200    1348 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0879576s)
	I0604 16:28:29.832051    1348 network_create.go:272] running [docker network inspect enable-default-cni-20220604161352-5712] to gather additional debugging logs...
	I0604 16:28:29.832051    1348 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220604161352-5712
	W0604 16:28:30.939983    1348 cli_runner.go:211] docker network inspect enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:30.939983    1348 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220604161352-5712: (1.1079199s)
	I0604 16:28:30.939983    1348 network_create.go:275] error running [docker network inspect enable-default-cni-20220604161352-5712]: docker network inspect enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220604161352-5712
	I0604 16:28:30.939983    1348 network_create.go:277] output of [docker network inspect enable-default-cni-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220604161352-5712
	
	** /stderr **
	I0604 16:28:30.946983    1348 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:28:32.033597    1348 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0866018s)
	I0604 16:28:32.051459    1348 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00070e250] amended:false}} dirty:map[] misses:0}
	I0604 16:28:32.051510    1348 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:28:32.068023    1348 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00070e250] amended:true}} dirty:map[192.168.49.0:0xc00070e250 192.168.58.0:0xc000670318] misses:0}
	I0604 16:28:32.068023    1348 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
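The `network.go` lines above skip the reserved `192.168.49.0/24` and settle on `192.168.58.0/24`. A toy sketch of that selection — walking candidate `/24` subnets and skipping reserved ones. The step of 9 between octets matches the 49 → 58 jump seen in this log but is otherwise an assumption; minikube's real logic also checks host interfaces and tracks reservation expiry in a `sync.Map`:

```go
package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 subnets (49, 58, 67, ...)
// and returns the first one absent from the reserved set. The step of 9 is
// an assumption inferred from the log above, not a documented constant.
func firstFreeSubnet(reserved map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !reserved[subnet] {
			return subnet
		}
	}
	return "" // no free candidate
}

func main() {
	reserved := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(firstFreeSubnet(reserved))
}
```

As the next lines show, "free" here only means unreserved by minikube itself: the chosen subnet can still collide with an existing docker bridge network on the host.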
	I0604 16:28:32.068023    1348 network_create.go:115] attempt to create docker network enable-default-cni-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:28:32.077854    1348 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712
	W0604 16:28:33.142913    1348 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:33.142975    1348 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: (1.0650472s)
	E0604 16:28:33.143051    1348 network_create.go:104] error while trying to create docker network enable-default-cni-20220604161352-5712 192.168.58.0/24: create docker network enable-default-cni-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a1654bdd35d9dd6a6e068c8fa37e8054d91434d537f4a62d57c2bdd61c54754 (br-1a1654bdd35d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:28:33.143230    1348 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a1654bdd35d9dd6a6e068c8fa37e8054d91434d537f4a62d57c2bdd61c54754 (br-1a1654bdd35d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a1654bdd35d9dd6a6e068c8fa37e8054d91434d537f4a62d57c2bdd61c54754 (br-1a1654bdd35d): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
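The daemon rejects the `docker network create` because `192.168.58.0/24` collides with an existing bridge ("networks have overlapping IPv4"). The overlap condition itself is easy to reproduce with the standard library's `net.ParseCIDR`; this is a self-contained check, not docker's actual code:

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR ranges share any addresses --
// the condition the docker daemon rejects with
// "networks have overlapping IPv4".
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	// Two ranges overlap iff either contains the other's base address.
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	ok, _ := cidrsOverlap("192.168.58.0/24", "192.168.0.0/16")
	fmt.Println(ok)
}
```

In this run the conflicting network belongs to another concurrently running test cluster, which is why minikube downgrades the failure to a warning and continues without a dedicated network.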
	I0604 16:28:33.160618    1348 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:28:34.234427    1348 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.073626s)
	I0604 16:28:34.242356    1348 cli_runner.go:164] Run: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:28:35.340789    1348 cli_runner.go:211] docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:28:35.341379    1348 cli_runner.go:217] Completed: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0974404s)
	I0604 16:28:35.341459    1348 client.go:171] LocalClient.Create took 6.6141247s
	I0604 16:28:37.364076    1348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:28:37.371171    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:38.478439    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:38.478439    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.1070971s)
	I0604 16:28:38.478439    1348 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:38.829191    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:39.935583    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:39.935583    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.1063795s)
	W0604 16:28:39.935583    1348 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	
	W0604 16:28:39.935583    1348 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:39.946644    1348 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:28:39.950079    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:40.988969    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:40.989101    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0387258s)
	I0604 16:28:40.989101    1348 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:41.241682    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:42.267322    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:42.267322    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0254467s)
	W0604 16:28:42.267322    1348 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	
	W0604 16:28:42.267322    1348 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:42.267322    1348 start.go:134] duration metric: createHost completed in 13.5449421s
	I0604 16:28:42.277616    1348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:28:42.285004    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:43.314089    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:43.314263    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0290434s)
	I0604 16:28:43.314263    1348 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:43.575220    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:44.658480    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:44.658480    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0829762s)
	W0604 16:28:44.658480    1348 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	
	W0604 16:28:44.658480    1348 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:44.668962    1348 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:28:44.674834    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:45.722134    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:45.722134    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0472887s)
	I0604 16:28:45.722134    1348 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:45.936432    1348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712
	W0604 16:28:46.975284    1348 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712 returned with exit code 1
	I0604 16:28:46.975284    1348 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: (1.0388405s)
	W0604 16:28:46.975284    1348 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	
	W0604 16:28:46.975284    1348 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220604161352-5712
	I0604 16:28:46.975284    1348 fix.go:57] fixHost completed within 46.6603752s
	I0604 16:28:46.975284    1348 start.go:81] releasing machines lock for "enable-default-cni-20220604161352-5712", held for 46.6603752s
	W0604 16:28:46.976156    1348 out.go:239] * Failed to start docker container. Running "minikube delete -p enable-default-cni-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220604161352-5712 container: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220604161352-5712': mkdir /var/lib/docker/volumes/enable-default-cni-20220604161352-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220604161352-5712 container: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220604161352-5712': mkdir /var/lib/docker/volumes/enable-default-cni-20220604161352-5712: read-only file system
	
	I0604 16:28:46.980165    1348 out.go:177] 
	W0604 16:28:46.983156    1348 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220604161352-5712 container: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220604161352-5712': mkdir /var/lib/docker/volumes/enable-default-cni-20220604161352-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220604161352-5712 container: docker volume create enable-default-cni-20220604161352-5712 --label name.minikube.sigs.k8s.io=enable-default-cni-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220604161352-5712': mkdir /var/lib/docker/volumes/enable-default-cni-20220604161352-5712: read-only file system
	
	W0604 16:28:46.983156    1348 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:28:46.983156    1348 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:28:46.986161    1348 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (77.08s)
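The cli_runner lines above show minikube retrying the failed port inspection with a short delay ("will retry after 198.275464ms" at retry.go:31). A rough sketch of that retry-with-growing-delay pattern, in Python rather than minikube's actual Go implementation, with a made-up helper name:

```python
import time

def retry_with_backoff(fn, attempts=5, base_delay=0.2):
    """Call fn until it succeeds, sleeping a little longer after each failure."""
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            time.sleep(base_delay * (i + 1))  # 0.2s, 0.4s, 0.6s, ...
    raise last_err
```

In this run the container never comes back ("Error: No such container"), so every attempt fails and the start ultimately exits with status 60.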

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220604162348-5712 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p newest-cni-20220604162348-5712 "sudo crictl images -o json": exit status 80 (3.1401888s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p newest-cni-20220604162348-5712 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json: unexpected end of JSON input. output:

start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
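The diff above is the test comparing its expected v1.23.6 image list against whatever `sudo crictl images -o json` reported; since the ssh step failed, the output was empty and all eight images show as missing. A minimal sketch of that comparison (the `images`/`repoTags` fields follow crictl's JSON output shape; the helper name is made up):

```python
import json

# Expected images for Kubernetes v1.23.6, taken from the diff above.
WANT_V1_23_6 = [
    "gcr.io/k8s-minikube/storage-provisioner:v5",
    "k8s.gcr.io/coredns/coredns:v1.8.6",
    "k8s.gcr.io/etcd:3.5.1-0",
    "k8s.gcr.io/kube-apiserver:v1.23.6",
    "k8s.gcr.io/kube-controller-manager:v1.23.6",
    "k8s.gcr.io/kube-proxy:v1.23.6",
    "k8s.gcr.io/kube-scheduler:v1.23.6",
    "k8s.gcr.io/pause:3.6",
]

def missing_images(want, crictl_json):
    """Return the wanted tags absent from `crictl images -o json` output.

    Empty or malformed output (as in this run) counts as "no images".
    """
    try:
        data = json.loads(crictl_json)
    except json.JSONDecodeError:
        data = {}
    got = {tag for img in data.get("images", []) for tag in img.get("repoTags", [])}
    return [tag for tag in want if tag not in got]
```

On this run's empty output the function reports every wanted image missing, which matches the all-`-` diff above.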
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220604162348-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220604162348-5712: exit status 1 (1.1877299s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (2.9392363s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:27:55.163844    6536 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220604162348-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.28s)

TestStartStop/group/newest-cni/serial/Pause (11.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220604162348-5712 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20220604162348-5712 --alsologtostderr -v=1: exit status 80 (3.192609s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0604 16:27:55.457081    8228 out.go:296] Setting OutFile to fd 1672 ...
	I0604 16:27:55.521987    8228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:55.521987    8228 out.go:309] Setting ErrFile to fd 1724...
	I0604 16:27:55.521987    8228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:27:55.533208    8228 out.go:303] Setting JSON to false
	I0604 16:27:55.533208    8228 mustload.go:65] Loading cluster: newest-cni-20220604162348-5712
	I0604 16:27:55.534478    8228 config.go:178] Loaded profile config "newest-cni-20220604162348-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:27:55.549214    8228 cli_runner.go:164] Run: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}
	W0604 16:27:58.076182    8228 cli_runner.go:211] docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:27:58.076182    8228 cli_runner.go:217] Completed: docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: (2.525935s)
	I0604 16:27:58.079684    8228 out.go:177] 
	W0604 16:27:58.082114    8228 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712
	
	W0604 16:27:58.082114    8228 out.go:239] * 
	* 
	W0604 16:27:58.354562    8228 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_12.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 16:27:58.357554    8228 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p newest-cni-20220604162348-5712 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220604162348-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220604162348-5712: exit status 1 (1.0991215s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (2.960844s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:28:02.426989    4744 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220604162348-5712" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220604162348-5712
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220604162348-5712: exit status 1 (1.1597093s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220604162348-5712 -n newest-cni-20220604162348-5712: exit status 7 (3.0973484s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0604 16:28:06.693655    6212 status.go:247] status error: host: state: unknown state "newest-cni-20220604162348-5712": docker container inspect newest-cni-20220604162348-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220604162348-5712

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220604162348-5712" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (11.53s)

TestNetworkPlugins/group/kubenet/Start (75.22s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20220604161352-5712 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: exit status 60 (1m15.1078098s)

-- stdout --
	* [kubenet-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubenet-20220604161352-5712 in cluster kubenet-20220604161352-5712
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kubenet-20220604161352-5712" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0604 16:28:22.945870    3756 out.go:296] Setting OutFile to fd 1560 ...
	I0604 16:28:23.000900    3756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:28:23.000900    3756 out.go:309] Setting ErrFile to fd 1584...
	I0604 16:28:23.000900    3756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 16:28:23.023128    3756 out.go:303] Setting JSON to false
	I0604 16:28:23.025763    3756 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11175,"bootTime":1654348928,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 16:28:23.025763    3756 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 16:28:23.030281    3756 out.go:177] * [kubenet-20220604161352-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 16:28:23.033517    3756 notify.go:193] Checking for updates...
	I0604 16:28:23.036550    3756 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 16:28:23.038945    3756 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 16:28:23.040856    3756 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 16:28:23.043211    3756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 16:28:23.046196    3756 config.go:178] Loaded profile config "bridge-20220604161352-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:28:23.046777    3756 config.go:178] Loaded profile config "enable-default-cni-20220604161352-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:28:23.046777    3756 config.go:178] Loaded profile config "false-20220604161400-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:28:23.047549    3756 config.go:178] Loaded profile config "multinode-20220604155719-5712-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 16:28:23.047549    3756 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 16:28:25.735048    3756 docker.go:137] docker version: linux-20.10.16
	I0604 16:28:25.740529    3756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:28:27.823095    3756 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0825428s)
	I0604 16:28:27.823815    3756 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:28:26.7918343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:28:27.846148    3756 out.go:177] * Using the docker driver based on user configuration
	I0604 16:28:27.848142    3756 start.go:284] selected driver: docker
	I0604 16:28:27.848142    3756 start.go:806] validating driver "docker" against <nil>
	I0604 16:28:27.848142    3756 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 16:28:27.923225    3756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 16:28:29.996590    3756 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0732416s)
	I0604 16:28:29.996648    3756 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-04 16:28:28.9713423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 16:28:29.997305    3756 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 16:28:29.997871    3756 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 16:28:30.003704    3756 out.go:177] * Using Docker Desktop driver with the root privilege
	I0604 16:28:30.005758    3756 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0604 16:28:30.005758    3756 start_flags.go:306] config:
	{Name:kubenet-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220604161352-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 16:28:30.010104    3756 out.go:177] * Starting control plane node kubenet-20220604161352-5712 in cluster kubenet-20220604161352-5712
	I0604 16:28:30.019043    3756 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 16:28:30.021473    3756 out.go:177] * Pulling base image ...
	I0604 16:28:30.024830    3756 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 16:28:30.024830    3756 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 16:28:30.024830    3756 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 16:28:30.024830    3756 cache.go:57] Caching tarball of preloaded images
	I0604 16:28:30.025973    3756 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 16:28:30.026228    3756 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 16:28:30.026884    3756 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220604161352-5712\config.json ...
	I0604 16:28:30.027638    3756 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220604161352-5712\config.json: {Name:mk606c1696bc9afbc5ee743cea3b8a095b824530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 16:28:31.129071    3756 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 16:28:31.129071    3756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:28:31.129071    3756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:28:31.129071    3756 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 16:28:31.129596    3756 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 16:28:31.129596    3756 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 16:28:31.129767    3756 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 16:28:31.129767    3756 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from local cache
	I0604 16:28:31.129767    3756 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 16:28:33.427209    3756 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 from cached tarball
	I0604 16:28:33.427209    3756 cache.go:206] Successfully downloaded all kic artifacts
	I0604 16:28:33.427209    3756 start.go:352] acquiring machines lock for kubenet-20220604161352-5712: {Name:mkef10e063497f59af9eb2f27f8d242635c9ad8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:28:33.427209    3756 start.go:356] acquired machines lock for "kubenet-20220604161352-5712" in 0s
	I0604 16:28:33.427209    3756 start.go:91] Provisioning new machine with config: &{Name:kubenet-20220604161352-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220604161352-5712 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 16:28:33.427209    3756 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:28:33.431174    3756 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:28:33.431174    3756 start.go:165] libmachine.API.Create for "kubenet-20220604161352-5712" (driver="docker")
	I0604 16:28:33.431174    3756 client.go:168] LocalClient.Create starting
	I0604 16:28:33.432174    3756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:28:33.432174    3756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:33.432174    3756 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:33.432174    3756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:28:33.432174    3756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:28:33.433185    3756 main.go:134] libmachine: Parsing certificate...
	I0604 16:28:33.441177    3756 cli_runner.go:164] Run: docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:28:34.483071    3756 cli_runner.go:211] docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:28:34.483071    3756 cli_runner.go:217] Completed: docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.041695s)
	I0604 16:28:34.490458    3756 network_create.go:272] running [docker network inspect kubenet-20220604161352-5712] to gather additional debugging logs...
	I0604 16:28:34.490458    3756 cli_runner.go:164] Run: docker network inspect kubenet-20220604161352-5712
	W0604 16:28:35.551234    3756 cli_runner.go:211] docker network inspect kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:28:35.551234    3756 cli_runner.go:217] Completed: docker network inspect kubenet-20220604161352-5712: (1.0606147s)
	I0604 16:28:35.551234    3756 network_create.go:275] error running [docker network inspect kubenet-20220604161352-5712]: docker network inspect kubenet-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220604161352-5712
	I0604 16:28:35.551439    3756 network_create.go:277] output of [docker network inspect kubenet-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220604161352-5712
	
	** /stderr **
	I0604 16:28:35.558969    3756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:28:36.607035    3756 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0480542s)
	I0604 16:28:36.627094    3756 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00064a320] misses:0}
	I0604 16:28:36.627094    3756 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:28:36.627094    3756 network_create.go:115] attempt to create docker network kubenet-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0604 16:28:36.634500    3756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712
	W0604 16:28:37.678944    3756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:28:37.678944    3756 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: (1.0438948s)
	E0604 16:28:37.678944    3756 network_create.go:104] error while trying to create docker network kubenet-20220604161352-5712 192.168.49.0/24: create docker network kubenet-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a4fdff3c0b6419e18875000e88cfca172c8aa626b972119555dedb39f34243f4 (br-a4fdff3c0b64): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	W0604 16:28:37.678944    3756 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a4fdff3c0b6419e18875000e88cfca172c8aa626b972119555dedb39f34243f4 (br-a4fdff3c0b64): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220604161352-5712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a4fdff3c0b6419e18875000e88cfca172c8aa626b972119555dedb39f34243f4 (br-a4fdff3c0b64): conflicts with network c6188639961469d6e1ffb24e3f3dc8c7f2835092e4ac5b350f455ae9eed1873e (br-c61886399614): networks have overlapping IPv4
	
	I0604 16:28:37.692954    3756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:28:38.760409    3756 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0672458s)
	I0604 16:28:38.767898    3756 cli_runner.go:164] Run: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:28:39.841108    3756 cli_runner.go:211] docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:28:39.841108    3756 cli_runner.go:217] Completed: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: (1.0731979s)
	I0604 16:28:39.841108    3756 client.go:171] LocalClient.Create took 6.4098635s
	I0604 16:28:41.866698    3756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:28:41.873423    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:28:42.949392    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:28:42.949392    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0758238s)
	I0604 16:28:42.949392    3756 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:43.244568    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:28:44.297339    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:28:44.297339    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0527599s)
	W0604 16:28:44.297339    3756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	
	W0604 16:28:44.297339    3756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:44.307336    3756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:28:44.314335    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:28:45.339493    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:28:45.339493    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0251461s)
	I0604 16:28:45.339493    3756 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:45.652408    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:28:46.674069    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:28:46.674069    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0216491s)
	W0604 16:28:46.674069    3756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	
	W0604 16:28:46.674069    3756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:46.674069    3756 start.go:134] duration metric: createHost completed in 13.2467129s
	I0604 16:28:46.674069    3756 start.go:81] releasing machines lock for "kubenet-20220604161352-5712", held for 13.2467129s
	W0604 16:28:46.674601    3756 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for kubenet-20220604161352-5712 container: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220604161352-5712': mkdir /var/lib/docker/volumes/kubenet-20220604161352-5712: read-only file system
	I0604 16:28:46.691385    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:47.780114    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:47.780223    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0885851s)
	I0604 16:28:47.780223    3756 delete.go:82] Unable to get host status for kubenet-20220604161352-5712, assuming it has already been deleted: state: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	W0604 16:28:47.780223    3756 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220604161352-5712 container: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220604161352-5712': mkdir /var/lib/docker/volumes/kubenet-20220604161352-5712: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220604161352-5712 container: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220604161352-5712': mkdir /var/lib/docker/volumes/kubenet-20220604161352-5712: read-only file system
	
	I0604 16:28:47.780223    3756 start.go:614] Will try again in 5 seconds ...
	I0604 16:28:52.789560    3756 start.go:352] acquiring machines lock for kubenet-20220604161352-5712: {Name:mkef10e063497f59af9eb2f27f8d242635c9ad8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 16:28:52.789843    3756 start.go:356] acquired machines lock for "kubenet-20220604161352-5712" in 216µs
	I0604 16:28:52.789843    3756 start.go:94] Skipping create...Using existing machine configuration
	I0604 16:28:52.789843    3756 fix.go:55] fixHost starting: 
	I0604 16:28:52.803475    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:53.867605    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:53.867934    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0638856s)
	I0604 16:28:53.868015    3756 fix.go:103] recreateIfNeeded on kubenet-20220604161352-5712: state= err=unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:53.868086    3756 fix.go:108] machineExists: false. err=machine does not exist
	I0604 16:28:53.871007    3756 out.go:177] * docker "kubenet-20220604161352-5712" container is missing, will recreate.
	I0604 16:28:53.874110    3756 delete.go:124] DEMOLISHING kubenet-20220604161352-5712 ...
	I0604 16:28:53.885774    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:54.932888    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:54.933025    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0470058s)
	W0604 16:28:54.933093    3756 stop.go:75] unable to get state: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:54.933160    3756 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:54.947173    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:55.992941    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:55.992941    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0457565s)
	I0604 16:28:55.992941    3756 delete.go:82] Unable to get host status for kubenet-20220604161352-5712, assuming it has already been deleted: state: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:56.000716    3756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220604161352-5712
	W0604 16:28:57.055347    3756 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:28:57.055347    3756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubenet-20220604161352-5712: (1.0546198s)
	I0604 16:28:57.055347    3756 kic.go:356] could not find the container kubenet-20220604161352-5712 to remove it. will try anyways
	I0604 16:28:57.063746    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:28:58.107068    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:28:58.107145    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0430952s)
	W0604 16:28:58.107194    3756 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:28:58.120162    3756 cli_runner.go:164] Run: docker exec --privileged -t kubenet-20220604161352-5712 /bin/bash -c "sudo init 0"
	W0604 16:28:59.192832    3756 cli_runner.go:211] docker exec --privileged -t kubenet-20220604161352-5712 /bin/bash -c "sudo init 0" returned with exit code 1
	I0604 16:28:59.192832    3756 cli_runner.go:217] Completed: docker exec --privileged -t kubenet-20220604161352-5712 /bin/bash -c "sudo init 0": (1.0726576s)
	I0604 16:28:59.192832    3756 oci.go:625] error shutdown kubenet-20220604161352-5712: docker exec --privileged -t kubenet-20220604161352-5712 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:00.215320    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:29:01.205624    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:29:01.205624    3756 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:01.205624    3756 oci.go:639] temporary error: container kubenet-20220604161352-5712 status is  but expect it to be exited
	I0604 16:29:01.205624    3756 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:01.676415    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:29:02.700834    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:29:02.700834    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0243581s)
	I0604 16:29:02.700834    3756 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:02.700834    3756 oci.go:639] temporary error: container kubenet-20220604161352-5712 status is  but expect it to be exited
	I0604 16:29:02.700834    3756 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:03.607857    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:29:04.650601    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:29:04.650601    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0427327s)
	I0604 16:29:04.650601    3756 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:04.650601    3756 oci.go:639] temporary error: container kubenet-20220604161352-5712 status is  but expect it to be exited
	I0604 16:29:04.650601    3756 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:05.296735    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:29:06.340720    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:29:06.340954    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0439729s)
	I0604 16:29:06.341110    3756 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:06.341183    3756 oci.go:639] temporary error: container kubenet-20220604161352-5712 status is  but expect it to be exited
	I0604 16:29:06.341212    3756 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:07.465571    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:29:08.485800    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:29:08.485800    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0201181s)
	I0604 16:29:08.485800    3756 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:08.486098    3756 oci.go:639] temporary error: container kubenet-20220604161352-5712 status is  but expect it to be exited
	I0604 16:29:08.486098    3756 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:10.008897    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:29:11.005800    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:29:11.005800    3756 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:11.005800    3756 oci.go:639] temporary error: container kubenet-20220604161352-5712 status is  but expect it to be exited
	I0604 16:29:11.005800    3756 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:14.070161    3756 cli_runner.go:164] Run: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}
	W0604 16:29:15.109200    3756 cli_runner.go:211] docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}} returned with exit code 1
	I0604 16:29:15.109328    3756 cli_runner.go:217] Completed: docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: (1.0390275s)
	I0604 16:29:15.109328    3756 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:15.109328    3756 oci.go:639] temporary error: container kubenet-20220604161352-5712 status is  but expect it to be exited
	I0604 16:29:15.109328    3756 oci.go:88] couldn't shut down kubenet-20220604161352-5712 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubenet-20220604161352-5712": docker container inspect kubenet-20220604161352-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	 
	I0604 16:29:15.116503    3756 cli_runner.go:164] Run: docker rm -f -v kubenet-20220604161352-5712
	I0604 16:29:16.129049    3756 cli_runner.go:217] Completed: docker rm -f -v kubenet-20220604161352-5712: (1.0125345s)
	I0604 16:29:16.137187    3756 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220604161352-5712
	W0604 16:29:17.181511    3756 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:17.181511    3756 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubenet-20220604161352-5712: (1.0441s)
	I0604 16:29:17.189435    3756 cli_runner.go:164] Run: docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:29:18.196956    3756 cli_runner.go:211] docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:29:18.196956    3756 cli_runner.go:217] Completed: docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0074003s)
	I0604 16:29:18.205003    3756 network_create.go:272] running [docker network inspect kubenet-20220604161352-5712] to gather additional debugging logs...
	I0604 16:29:18.205550    3756 cli_runner.go:164] Run: docker network inspect kubenet-20220604161352-5712
	W0604 16:29:19.227681    3756 cli_runner.go:211] docker network inspect kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:19.227865    3756 cli_runner.go:217] Completed: docker network inspect kubenet-20220604161352-5712: (1.02212s)
	I0604 16:29:19.227914    3756 network_create.go:275] error running [docker network inspect kubenet-20220604161352-5712]: docker network inspect kubenet-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220604161352-5712
	I0604 16:29:19.227914    3756 network_create.go:277] output of [docker network inspect kubenet-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220604161352-5712
	
	** /stderr **
	W0604 16:29:19.229126    3756 delete.go:139] delete failed (probably ok) <nil>
	I0604 16:29:19.229169    3756 fix.go:115] Sleeping 1 second for extra luck!
	I0604 16:29:20.241755    3756 start.go:131] createHost starting for "" (driver="docker")
	I0604 16:29:20.246475    3756 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0604 16:29:20.246564    3756 start.go:165] libmachine.API.Create for "kubenet-20220604161352-5712" (driver="docker")
	I0604 16:29:20.246564    3756 client.go:168] LocalClient.Create starting
	I0604 16:29:20.247353    3756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0604 16:29:20.247353    3756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:29:20.247353    3756 main.go:134] libmachine: Parsing certificate...
	I0604 16:29:20.247923    3756 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0604 16:29:20.248198    3756 main.go:134] libmachine: Decoding PEM data...
	I0604 16:29:20.248198    3756 main.go:134] libmachine: Parsing certificate...
	I0604 16:29:20.255880    3756 cli_runner.go:164] Run: docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0604 16:29:21.276827    3756 cli_runner.go:211] docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0604 16:29:21.276827    3756 cli_runner.go:217] Completed: docker network inspect kubenet-20220604161352-5712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0209358s)
	I0604 16:29:21.285296    3756 network_create.go:272] running [docker network inspect kubenet-20220604161352-5712] to gather additional debugging logs...
	I0604 16:29:21.285296    3756 cli_runner.go:164] Run: docker network inspect kubenet-20220604161352-5712
	W0604 16:29:22.281242    3756 cli_runner.go:211] docker network inspect kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:22.281242    3756 network_create.go:275] error running [docker network inspect kubenet-20220604161352-5712]: docker network inspect kubenet-20220604161352-5712: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220604161352-5712
	I0604 16:29:22.281242    3756 network_create.go:277] output of [docker network inspect kubenet-20220604161352-5712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220604161352-5712
	
	** /stderr **
	I0604 16:29:22.288983    3756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0604 16:29:23.301605    3756 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0123403s)
	I0604 16:29:23.322778    3756 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00064a320] amended:false}} dirty:map[] misses:0}
	I0604 16:29:23.322778    3756 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:29:23.340238    3756 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00064a320] amended:true}} dirty:map[192.168.49.0:0xc00064a320 192.168.58.0:0xc00064a580] misses:0}
	I0604 16:29:23.340238    3756 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0604 16:29:23.340357    3756 network_create.go:115] attempt to create docker network kubenet-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0604 16:29:23.350140    3756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712
	W0604 16:29:24.365462    3756 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:24.365462    3756 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: (1.0153109s)
	E0604 16:29:24.365462    3756 network_create.go:104] error while trying to create docker network kubenet-20220604161352-5712 192.168.58.0/24: create docker network kubenet-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 448d7cad44c26fc81d88662f4043c72472e890658775b7f231e4ae09b42dadb4 (br-448d7cad44c2): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	W0604 16:29:24.365462    3756 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 448d7cad44c26fc81d88662f4043c72472e890658775b7f231e4ae09b42dadb4 (br-448d7cad44c2): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220604161352-5712 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220604161352-5712: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 448d7cad44c26fc81d88662f4043c72472e890658775b7f231e4ae09b42dadb4 (br-448d7cad44c2): conflicts with network 1140b1ac4d94029cff164b56fb7fa3f71db5eeb5da2d2199463f321dcb6fd9fc (br-1140b1ac4d94): networks have overlapping IPv4
	
	I0604 16:29:24.379798    3756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0604 16:29:25.442708    3756 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.062622s)
	I0604 16:29:25.450115    3756 cli_runner.go:164] Run: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true
	W0604 16:29:26.447270    3756 cli_runner.go:211] docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0604 16:29:26.447329    3756 client.go:171] LocalClient.Create took 6.2006965s
	I0604 16:29:28.466142    3756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:29:28.472257    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:29.489059    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:29.489059    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0165988s)
	I0604 16:29:29.489293    3756 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:29.839151    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:30.830754    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	W0604 16:29:30.830754    3756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	
	W0604 16:29:30.830754    3756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:30.841498    3756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:29:30.847514    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:31.840835    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:31.840835    3756 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:32.079326    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:33.138225    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:33.138225    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0588874s)
	W0604 16:29:33.138225    3756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	
	W0604 16:29:33.138225    3756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:33.138225    3756 start.go:134] duration metric: createHost completed in 12.8960732s
	I0604 16:29:33.148908    3756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 16:29:33.154879    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:34.185499    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:34.185563    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0304719s)
	I0604 16:29:34.185738    3756 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:34.444644    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:35.471084    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:35.471084    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0262418s)
	W0604 16:29:35.471084    3756 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	
	W0604 16:29:35.471084    3756 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:35.482266    3756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0604 16:29:35.488301    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:36.520288    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:36.520288    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0319427s)
	I0604 16:29:36.520288    3756 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:36.742441    3756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712
	W0604 16:29:37.780813    3756 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712 returned with exit code 1
	I0604 16:29:37.780813    3756 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: (1.0383604s)
	W0604 16:29:37.780813    3756 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	
	W0604 16:29:37.780813    3756 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220604161352-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220604161352-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220604161352-5712
	I0604 16:29:37.780813    3756 fix.go:57] fixHost completed within 44.9904713s
	I0604 16:29:37.780813    3756 start.go:81] releasing machines lock for "kubenet-20220604161352-5712", held for 44.9904713s
	W0604 16:29:37.781561    3756 out.go:239] * Failed to start docker container. Running "minikube delete -p kubenet-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220604161352-5712 container: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220604161352-5712': mkdir /var/lib/docker/volumes/kubenet-20220604161352-5712: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubenet-20220604161352-5712" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220604161352-5712 container: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220604161352-5712': mkdir /var/lib/docker/volumes/kubenet-20220604161352-5712: read-only file system
	
	I0604 16:29:37.786882    3756 out.go:177] 
	W0604 16:29:37.789111    3756 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220604161352-5712 container: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220604161352-5712': mkdir /var/lib/docker/volumes/kubenet-20220604161352-5712: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220604161352-5712 container: docker volume create kubenet-20220604161352-5712 --label name.minikube.sigs.k8s.io=kubenet-20220604161352-5712 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220604161352-5712: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220604161352-5712': mkdir /var/lib/docker/volumes/kubenet-20220604161352-5712: read-only file system
	
	W0604 16:29:37.789111    3756 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0604 16:29:37.789111    3756 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0604 16:29:37.792054    3756 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kubenet/Start (75.22s)


Test pass (50/220)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 21.46
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.47
10 TestDownloadOnly/v1.23.6/json-events 16.96
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.6
16 TestDownloadOnly/DeleteAll 11.38
17 TestDownloadOnly/DeleteAlwaysSucceeds 7.06
18 TestDownloadOnlyKic 45.89
19 TestBinaryMirror 16.3
33 TestErrorSpam/start 20.62
34 TestErrorSpam/status 8.52
35 TestErrorSpam/pause 9.15
36 TestErrorSpam/unpause 9.19
37 TestErrorSpam/stop 66.25
40 TestFunctional/serial/CopySyncFile 0.03
48 TestFunctional/serial/CacheCmd/cache/add_remote 10.73
50 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.36
51 TestFunctional/serial/CacheCmd/cache/list 0.36
54 TestFunctional/serial/CacheCmd/cache/delete 0.69
62 TestFunctional/parallel/ConfigCmd 2.12
64 TestFunctional/parallel/DryRun 12.86
65 TestFunctional/parallel/InternationalLanguage 5.42
71 TestFunctional/parallel/AddonsCmd 3.41
87 TestFunctional/parallel/ProfileCmd/profile_not_create 7.29
89 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
97 TestFunctional/parallel/ProfileCmd/profile_list 4.42
101 TestFunctional/parallel/ProfileCmd/profile_json_output 4.48
108 TestFunctional/parallel/Version/short 0.37
114 TestFunctional/parallel/ImageCommands/ImageRemove 5.86
117 TestFunctional/delete_addon-resizer_images 2.1
118 TestFunctional/delete_my-image_image 1.03
119 TestFunctional/delete_minikube_cached_images 1.07
125 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 2.85
138 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
152 TestErrorJSONOutput 7.16
155 TestKicCustomNetwork/use_default_bridge_network 229.31
158 TestMainNoArgs 0.33
193 TestNoKubernetes/serial/StartNoK8sWithVersion 0.42
194 TestStoppedBinaryUpgrade/Setup 0.7
263 TestStartStop/group/newest-cni/serial/DeployApp 0
264 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.91
277 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
278 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/json-events (21.46s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220604151954-5712 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220604151954-5712 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (21.4576028s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.46s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.47s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220604151954-5712
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220604151954-5712: exit status 85 (465.5387ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/04 15:19:56
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 15:19:56.676807    7804 out.go:296] Setting OutFile to fd 672 ...
	I0604 15:19:56.741497    7804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:19:56.741497    7804 out.go:309] Setting ErrFile to fd 676...
	I0604 15:19:56.741497    7804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0604 15:19:56.752289    7804 root.go:300] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0604 15:19:56.756178    7804 out.go:303] Setting JSON to true
	I0604 15:19:56.758184    7804 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7068,"bootTime":1654348928,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:19:56.758184    7804 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:19:56.782260    7804 out.go:97] [download-only-20220604151954-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:19:56.782523    7804 notify.go:193] Checking for updates...
	W0604 15:19:56.782523    7804 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0604 15:19:56.785073    7804 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:19:56.787290    7804 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:19:56.790000    7804 out.go:169] MINIKUBE_LOCATION=14123
	I0604 15:19:56.792966    7804 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0604 15:19:56.797234    7804 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0604 15:19:56.797862    7804 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:19:59.310874    7804 docker.go:137] docker version: linux-20.10.16
	I0604 15:19:59.319523    7804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:20:01.343828    7804 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.024285s)
	I0604 15:20:01.344528    7804 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:20:00.3668038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:20:01.370016    7804 out.go:97] Using the docker driver based on user configuration
	I0604 15:20:01.370513    7804 start.go:284] selected driver: docker
	I0604 15:20:01.370513    7804 start.go:806] validating driver "docker" against <nil>
	I0604 15:20:01.391782    7804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:20:03.428911    7804 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0371086s)
	I0604 15:20:03.429164    7804 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:20:02.4196574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:20:03.429686    7804 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0604 15:20:03.552821    7804 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0604 15:20:03.553047    7804 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 15:20:03.566463    7804 out.go:169] Using Docker Desktop driver with the root privilege
	I0604 15:20:03.569167    7804 cni.go:95] Creating CNI manager for ""
	I0604 15:20:03.569167    7804 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 15:20:03.569167    7804 start_flags.go:306] config:
	{Name:download-only-20220604151954-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220604151954-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:20:03.572275    7804 out.go:97] Starting control plane node download-only-20220604151954-5712 in cluster download-only-20220604151954-5712
	I0604 15:20:03.572377    7804 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 15:20:03.574456    7804 out.go:97] Pulling base image ...
	I0604 15:20:03.574529    7804 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0604 15:20:03.574529    7804 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 15:20:03.621879    7804 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0604 15:20:03.621879    7804 cache.go:57] Caching tarball of preloaded images
	I0604 15:20:03.622505    7804 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0604 15:20:03.624989    7804 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0604 15:20:03.625539    7804 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0604 15:20:03.702440    7804 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0604 15:20:04.694192    7804 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 15:20:04.694257    7804 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:20:04.694257    7804 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:20:04.694257    7804 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 15:20:04.695092    7804 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220604151954-5712"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.47s)

TestDownloadOnly/v1.23.6/json-events (16.96s)
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220604151954-5712 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220604151954-5712 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker: (16.9636866s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (16.96s)

TestDownloadOnly/v1.23.6/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

TestDownloadOnly/v1.23.6/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

TestDownloadOnly/v1.23.6/LogsDuration (0.6s)
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220604151954-5712
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220604151954-5712: exit status 85 (597.8931ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/04 15:20:17
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 15:20:17.186392    6540 out.go:296] Setting OutFile to fd 700 ...
	I0604 15:20:17.239382    6540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:20:17.239382    6540 out.go:309] Setting ErrFile to fd 696...
	I0604 15:20:17.239382    6540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0604 15:20:17.249363    6540 root.go:300] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0604 15:20:17.249363    6540 out.go:303] Setting JSON to true
	I0604 15:20:17.252367    6540 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7089,"bootTime":1654348928,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:20:17.252367    6540 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:20:17.256356    6540 out.go:97] [download-only-20220604151954-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:20:17.257216    6540 notify.go:193] Checking for updates...
	I0604 15:20:17.259735    6540 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:20:17.262081    6540 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:20:17.264714    6540 out.go:169] MINIKUBE_LOCATION=14123
	I0604 15:20:17.267073    6540 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0604 15:20:17.271643    6540 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0604 15:20:17.273155    6540 config.go:178] Loaded profile config "download-only-20220604151954-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0604 15:20:17.273155    6540 start.go:714] api.Load failed for download-only-20220604151954-5712: filestore "download-only-20220604151954-5712": Docker machine "download-only-20220604151954-5712" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0604 15:20:17.273155    6540 driver.go:358] Setting default libvirt URI to qemu:///system
	W0604 15:20:17.273725    6540 start.go:714] api.Load failed for download-only-20220604151954-5712: filestore "download-only-20220604151954-5712": Docker machine "download-only-20220604151954-5712" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0604 15:20:19.810529    6540 docker.go:137] docker version: linux-20.10.16
	I0604 15:20:19.818700    6540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:20:21.775167    6540 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9564474s)
	I0604 15:20:21.776205    6540 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:20:20.8115545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:20:22.040067    6540 out.go:97] Using the docker driver based on existing profile
	I0604 15:20:22.040591    6540 start.go:284] selected driver: docker
	I0604 15:20:22.040591    6540 start.go:806] validating driver "docker" against &{Name:download-only-20220604151954-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220604151954-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:20:22.062739    6540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:20:24.029534    6540 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9666449s)
	I0604 15:20:24.029959    6540 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-04 15:20:23.05867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:20:24.079273    6540 cni.go:95] Creating CNI manager for ""
	I0604 15:20:24.079388    6540 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0604 15:20:24.079609    6540 start_flags.go:306] config:
	{Name:download-only-20220604151954-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220604151954-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:20:24.082895    6540 out.go:97] Starting control plane node download-only-20220604151954-5712 in cluster download-only-20220604151954-5712
	I0604 15:20:24.082895    6540 cache.go:120] Beginning downloading kic base image for docker with docker
	I0604 15:20:24.084808    6540 out.go:97] Pulling base image ...
	I0604 15:20:24.084808    6540 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 15:20:24.084808    6540 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0604 15:20:24.125776    6540 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 15:20:24.125776    6540 cache.go:57] Caching tarball of preloaded images
	I0604 15:20:24.126756    6540 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 15:20:24.131753    6540 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0604 15:20:24.131753    6540 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0604 15:20:24.200861    6540 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0604 15:20:25.162853    6540 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0604 15:20:25.162853    6540 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:20:25.162853    6540 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0604 15:20:25.162853    6540 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0604 15:20:25.163391    6540 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0604 15:20:25.163438    6540 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0604 15:20:25.163550    6540 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0604 15:20:30.733882    6540 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0604 15:20:30.734926    6540 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0604 15:20:31.903982    6540 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0604 15:20:31.903982    6540 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20220604151954-5712\config.json ...
	I0604 15:20:31.906000    6540 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0604 15:20:31.907027    6540 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.23.6/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220604151954-5712"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.60s)
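Two implementation details surface in the download-only log above and are easy to miss: the preload and kubectl downloads embed their expected digest in a `checksum` query parameter (`md5:<hex>` inline for the tarball, `file:<url>` pointing at a `.sha256` for kubectl.exe), and image references are rewritten before caching because `:` is illegal in Windows file names (`kicbase-builds:v0.0.31...@sha256:6460...` becomes `kicbase-builds_v0.0.31...@sha256_6460...`). A minimal sketch of both behaviors, with hypothetical helper names rather than minikube's actual functions:

```python
import hashlib
from urllib.parse import parse_qs, urlsplit


def sanitize_windows_cache_name(path: str) -> str:
    """Replace ':' with '_' in a cache file name, keeping the drive prefix.

    Windows forbids ':' in file names except in the drive specifier (e.g.
    'C:'), so the tag and digest separators must be rewritten.
    """
    drive, sep, rest = path.partition(":\\")
    if sep:  # absolute Windows path: leave the drive specifier intact
        return drive + sep + rest.replace(":", "_")
    return path.replace(":", "_")


def verify_inline_checksum(url: str, data: bytes) -> bool:
    """Check downloaded bytes against an inline 'checksum=md5:<hex>' URL hint.

    The 'checksum=file:<url>' form used for kubectl.exe points at a separate
    .sha256 file and would need a second fetch, so it is not handled here.
    """
    hint = parse_qs(urlsplit(url).query).get("checksum", [""])[0]
    if not hint.startswith("md5:"):
        return True  # nothing inline to verify against
    return hashlib.md5(data).hexdigest() == hint.split(":", 1)[1]


src = r"C:\cache\kic\amd64\kicbase-builds:v0.0.31@sha256:6460c0.tar"
print(sanitize_windows_cache_name(src))
# -> C:\cache\kic\amd64\kicbase-builds_v0.0.31@sha256_6460c0.tar

payload = b"preloaded-images"
url = "https://example.invalid/p.tar.lz4?checksum=md5:" + hashlib.md5(payload).hexdigest()
print(verify_inline_checksum(url, payload))      # -> True
print(verify_inline_checksum(url, b"corrupted"))  # -> False
```

This matches the log's "saving checksum ... verifying checksum" sequence: the tarball is written to the cache first, then hashed and compared before it is trusted.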

TestDownloadOnly/DeleteAll (11.38s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.378622s)
--- PASS: TestDownloadOnly/DeleteAll (11.38s)

TestDownloadOnly/DeleteAlwaysSucceeds (7.06s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220604151954-5712
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220604151954-5712: (7.0631777s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (7.06s)

TestDownloadOnlyKic (45.89s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220604152059-5712 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220604152059-5712 --force --alsologtostderr --driver=docker: (35.8494683s)
helpers_test.go:175: Cleaning up "download-docker-20220604152059-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220604152059-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220604152059-5712: (8.9278938s)
--- PASS: TestDownloadOnlyKic (45.89s)

TestBinaryMirror (16.3s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220604152145-5712 --alsologtostderr --binary-mirror http://127.0.0.1:54003 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220604152145-5712 --alsologtostderr --binary-mirror http://127.0.0.1:54003 --driver=docker: (8.0722079s)
helpers_test.go:175: Cleaning up "binary-mirror-20220604152145-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220604152145-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220604152145-5712: (7.9821731s)
--- PASS: TestBinaryMirror (16.30s)

TestErrorSpam/start (20.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 start --dry-run: (6.8599792s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 start --dry-run: (6.8736912s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 start --dry-run
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 start --dry-run: (6.8793751s)
--- PASS: TestErrorSpam/start (20.62s)

TestErrorSpam/status (8.52s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 status
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 status: exit status 7 (2.803604s)

-- stdout --
	nospam-20220604152324-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0604 15:25:02.274122    7716 status.go:258] status error: host: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	E0604 15:25:02.274122    7716 status.go:261] The "nospam-20220604152324-5712" host does not exist!

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 status" failed: exit status 7
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 status
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 status: exit status 7 (2.8717531s)

-- stdout --
	nospam-20220604152324-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0604 15:25:05.144774    6788 status.go:258] status error: host: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	E0604 15:25:05.144774    6788 status.go:261] The "nospam-20220604152324-5712" host does not exist!

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 status" failed: exit status 7
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 status
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 status: exit status 7 (2.8420756s)

-- stdout --
	nospam-20220604152324-5712
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0604 15:25:07.989850    9024 status.go:258] status error: host: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	E0604 15:25:07.989953    9024 status.go:261] The "nospam-20220604152324-5712" host does not exist!

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 status" failed: exit status 7
--- PASS: TestErrorSpam/status (8.52s)
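Each `status` invocation above fails the same way: `docker container inspect` cannot find the container, so every component is reported as `Nonexistent` and the command exits with status 7 (whereas `pause`/`unpause` against the same missing container exit with 80, a GUEST_STATUS error). A toy sketch of that status mapping, with hypothetical names rather than minikube's actual code:

```python
def profile_status(inspect_exit_code: int, inspect_stderr: str) -> dict:
    """Map a 'docker container inspect' result to a minikube-style status.

    Simplified: real minikube probes kubelet/apiserver/kubeconfig separately,
    but when the host container is missing everything collapses to Nonexistent.
    """
    if inspect_exit_code != 0 and "No such container" in inspect_stderr:
        state = "Nonexistent"
    elif inspect_exit_code != 0:
        state = "Error"
    else:
        state = "Running"
    return {"host": state, "kubelet": state, "apiserver": state, "kubeconfig": state}


status = profile_status(1, "Error: No such container: nospam-20220604152324-5712")
print(status["host"])  # -> Nonexistent
# minikube signals the nonexistent host through its exit code:
exit_code = 7 if status["host"] == "Nonexistent" else 0
print(exit_code)  # -> 7
```

This is why the test still passes: TestErrorSpam only checks that repeated invocations produce consistent output, not that the cluster is healthy.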

TestErrorSpam/pause (9.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 pause: exit status 80 (3.0714586s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 pause: exit status 80 (3.0628983s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 pause" failed: exit status 80
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 pause
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 pause: exit status 80 (3.0072084s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (9.15s)

TestErrorSpam/unpause (9.19s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 unpause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 unpause: exit status 80 (3.0487819s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 unpause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 unpause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 unpause: exit status 80 (3.0812508s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 unpause" failed: exit status 80
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 unpause
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 unpause: exit status 80 (3.0610169s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220604152324-5712": docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (9.19s)

TestErrorSpam/stop (66.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 stop
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 stop: exit status 82 (22.1923543s)

-- stdout --
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:25:31.581656    8796 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220604152324-5712: stopping schedule-stop service for profile nospam-20220604152324-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220604152324-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220604152324-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 stop" failed: exit status 82
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 stop
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 stop: exit status 82 (22.1274081s)

-- stdout --
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:25:53.821346    9108 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220604152324-5712: stopping schedule-stop service for profile nospam-20220604152324-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220604152324-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220604152324-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 stop" failed: exit status 82
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 stop
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220604152324-5712 stop: exit status 82 (21.926663s)

-- stdout --
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	* Stopping node "nospam-20220604152324-5712"  ...
	
	

-- /stdout --
** stderr ** 
	E0604 15:26:15.792196    6512 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220604152324-5712: stopping schedule-stop service for profile nospam-20220604152324-5712: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220604152324-5712": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220604152324-5712: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220604152324-5712 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220604152324-5712
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_241.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220604152324-5712 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220604152324-5712 stop" failed: exit status 82
--- PASS: TestErrorSpam/stop (66.25s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\5712\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache add k8s.gcr.io/pause:3.1: (3.6405731s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache add k8s.gcr.io/pause:3.3: (3.5295664s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 cache add k8s.gcr.io/pause:latest: (3.5567795s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.73s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

TestFunctional/serial/CacheCmd/cache/list (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.36s)

TestFunctional/serial/CacheCmd/cache/delete (0.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.69s)

TestFunctional/parallel/ConfigCmd (2.12s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config get cpus: exit status 14 (347.557ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 config get cpus: exit status 14 (315.8204ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.12s)

TestFunctional/parallel/DryRun (12.86s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.4736192s)

-- stdout --
	* [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0604 15:33:28.807754    1764 out.go:296] Setting OutFile to fd 688 ...
	I0604 15:33:28.861397    1764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:28.861397    1764 out.go:309] Setting ErrFile to fd 816...
	I0604 15:33:28.861397    1764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:28.872756    1764 out.go:303] Setting JSON to false
	I0604 15:33:28.874683    1764 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7880,"bootTime":1654348928,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:33:28.874683    1764 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:33:28.878918    1764 out.go:177] * [functional-20220604152644-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:33:28.882550    1764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:33:28.884958    1764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:33:28.887924    1764 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 15:33:28.890574    1764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 15:33:28.893322    1764 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:33:28.894574    1764 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:33:31.622211    1764 docker.go:137] docker version: linux-20.10.16
	I0604 15:33:31.631335    1764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:33:33.772420    1764 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1410641s)
	I0604 15:33:33.773178    1764 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:45 OomKillDisable:true NGoroutines:48 SystemTime:2022-06-04 15:33:32.6463308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:33:33.778714    1764 out.go:177] * Using the docker driver based on existing profile
	I0604 15:33:33.781288    1764 start.go:284] selected driver: docker
	I0604 15:33:33.781288    1764 start.go:806] validating driver "docker" against &{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:33:33.781892    1764 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 15:33:34.014821    1764 out.go:177] 
	W0604 15:33:34.016563    1764 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0604 15:33:34.020518    1764 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --dry-run --alsologtostderr -v=1 --driver=docker: (7.3835614s)
--- PASS: TestFunctional/parallel/DryRun (12.86s)

TestFunctional/parallel/InternationalLanguage (5.42s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220604152644-5712 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.4243841s)

-- stdout --
	* [functional-20220604152644-5712] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0604 15:33:14.671010    8876 out.go:296] Setting OutFile to fd 920 ...
	I0604 15:33:14.730553    8876 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:14.730553    8876 out.go:309] Setting ErrFile to fd 848...
	I0604 15:33:14.730553    8876 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0604 15:33:14.744563    8876 out.go:303] Setting JSON to false
	I0604 15:33:14.746545    8876 start.go:115] hostinfo: {"hostname":"minikube2","uptime":7866,"bootTime":1654348928,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0604 15:33:14.746545    8876 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0604 15:33:14.749544    8876 out.go:177] * [functional-20220604152644-5712] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0604 15:33:14.753577    8876 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0604 15:33:14.755587    8876 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0604 15:33:14.757587    8876 out.go:177]   - MINIKUBE_LOCATION=14123
	I0604 15:33:14.760545    8876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 15:33:14.762562    8876 config.go:178] Loaded profile config "functional-20220604152644-5712": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0604 15:33:14.763552    8876 driver.go:358] Setting default libvirt URI to qemu:///system
	I0604 15:33:17.602203    8876 docker.go:137] docker version: linux-20.10.16
	I0604 15:33:17.610635    8876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0604 15:33:19.730773    8876 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.119365s)
	I0604 15:33:19.735760    8876 info.go:265] docker info: {ID:YXAZ:T6WJ:PFTF:LH6L:C7AG:BGSI:AVIS:YZEX:QFCR:2E3B:WKKV:BH4Z Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-04 15:33:18.7041256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0604 15:33:19.738753    8876 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0604 15:33:19.740749    8876 start.go:284] selected driver: docker
	I0604 15:33:19.740749    8876 start.go:806] validating driver "docker" against &{Name:functional-20220604152644-5712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220604152644-5712 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0604 15:33:19.740749    8876 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 15:33:19.812302    8876 out.go:177] 
	W0604 15:33:19.813572    8876 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0604 15:33:19.816164    8876 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.42s)
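The InternationalLanguage run above shows that minikube localizes the user-facing error text (French here) while the machine-readable reason code, RSRC_INSUFFICIENT_REQ_MEMORY, stays identical across locales. That invariant is what automation should match on, not the prose. A minimal sketch, using the two lines quoted from this log (the regex and helper below are ours, not part of minikube):

```python
import re

# English and French renderings of the same failure, quoted from the log above.
english = ("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory "
           "allocation 250MiB is less than the usable minimum of 1800MB")
french = ("X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation "
          "de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo")

# Reason codes are ALL_CAPS identifiers with underscores; localized prose is not.
REASON = re.compile(r"\b([A-Z]+(?:_[A-Z0-9]+)+)\b")

def reason_code(line: str) -> str:
    """Return the first ALL_CAPS reason identifier found in a minikube error line."""
    m = REASON.search(line)
    return m.group(1) if m else ""

assert reason_code(english) == reason_code(french) == "RSRC_INSUFFICIENT_REQ_MEMORY"
```

Both lines yield the same code, so a log scraper keyed on the identifier is locale-independent.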

TestFunctional/parallel/AddonsCmd (3.41s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 addons list: (3.0526288s)
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (7.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.0317511s)
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.2551876s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (7.29s)
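The profile_not_create run above deliberately misspells the subcommand (`profile lis`) and then inspects `profile list --output json` to confirm no profile was created as a side effect. A sketch of that inspection on a hypothetical payload; the top-level "valid"/"invalid" arrays and the "Name" field are assumptions about the JSON shape of this minikube release, not something shown in the log:

```python
import json

# Hypothetical `minikube profile list --output json` payload; only the real
# profile from this run should appear, never one named after the typo.
payload = json.loads(
    '{"invalid":[],"valid":[{"Name":"functional-20220604152644-5712"}]}'
)

names = [p["Name"] for p in payload["valid"]]
assert "lis" not in names  # the misspelled command must not create a profile
assert "functional-20220604152644-5712" in names
```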

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220604152644-5712 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220604152644-5712 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 7876: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (4.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Done: out/minikube-windows-amd64.exe profile list: (4.0677835s)
functional_test.go:1310: Took "4.0679517s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1324: Took "347.0637ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (4.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (4.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (4.1000728s)
functional_test.go:1361: Took "4.100374s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1374: Took "376.2878ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (4.48s)

TestFunctional/parallel/Version/short (0.37s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 version --short
--- PASS: TestFunctional/parallel/Version/short (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (5.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image rm gcr.io/google-containers/addon-resizer:functional-20220604152644-5712

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image rm gcr.io/google-containers/addon-resizer:functional-20220604152644-5712: (2.9024436s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220604152644-5712 image ls: (2.9554385s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (5.86s)

TestFunctional/delete_addon-resizer_images (2.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: (1.0237042s)
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220604152644-5712
functional_test.go:185: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220604152644-5712: (1.0577165s)
--- PASS: TestFunctional/delete_addon-resizer_images (2.10s)

TestFunctional/delete_my-image_image (1.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220604152644-5712
functional_test.go:193: (dbg) Done: docker rmi -f localhost/my-image:functional-20220604152644-5712: (1.0206986s)
--- PASS: TestFunctional/delete_my-image_image (1.03s)

TestFunctional/delete_minikube_cached_images (1.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220604152644-5712
functional_test.go:201: (dbg) Done: docker rmi -f minikube-local-cache-test:functional-20220604152644-5712: (1.0559239s)
--- PASS: TestFunctional/delete_minikube_cached_images (1.07s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220604153841-5712 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220604153841-5712 addons enable ingress-dns --alsologtostderr -v=5: (2.8538168s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.85s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (7.16s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220604154209-5712 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220604154209-5712 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (360.8644ms)

-- stdout --
	{"specversion":"1.0","id":"95f53aa4-08e0-43e6-a892-4d0e5326b282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220604154209-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e2dbfb6-b89c-4476-98fa-b7fe46497b34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"6c85b06d-d817-472b-9058-a38ef5239008","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"289f901c-9b93-41a7-b905-6b475f76382b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14123"}}
	{"specversion":"1.0","id":"2e5a38ec-fde4-42ec-a306-418ff6ec0c4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2b600320-391f-4e8b-b1c9-5663e8046548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220604154209-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220604154209-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220604154209-5712: (6.8027425s)
--- PASS: TestErrorJSONOutput (7.16s)
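With `--output=json`, minikube emits newline-delimited CloudEvents, as the stdout block above shows, so each line parses independently and error events carry the exit code and reason name in their `data` field. A sketch of consuming one such line, abridged from the event in this log (the `id` field is omitted for brevity):

```python
import json

# One error event abridged from the log above; each JSON line stands alone.
line = ('{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on windows/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
if event["type"] == "io.k8s.sigs.minikube.error":
    # exitcode arrives as a string; the test above observed "exit status 56".
    assert int(event["data"]["exitcode"]) == 56
    assert event["data"]["name"] == "DRV_UNSUPPORTED_OS"
```

Matching on the `type` suffix (`.error`, `.step`, `.info`) is enough to route events without parsing the localized `message` text.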

TestKicCustomNetwork/use_default_bridge_network (229.31s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220604154623-5712 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220604154623-5712 --network=bridge: (3m9.1968503s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0798682s)
helpers_test.go:175: Cleaning up "docker-network-20220604154623-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220604154623-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220604154623-5712: (39.0190506s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (229.31s)
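The custom-network test above shells out to `docker network ls --format {{.Name}}` and checks the returned names: with `--network=bridge` the cluster reuses Docker's default bridge instead of creating a profile-named network. Parsing that output is plain line-splitting; a sketch with hypothetical command output (the `raw` string below is invented for illustration):

```python
# Hypothetical output of `docker network ls --format {{.Name}}` after a
# `--network=bridge` start: only Docker's built-in networks are present.
raw = "bridge\nhost\nnone\n"
names = raw.splitlines()

assert "bridge" in names
# No network named after the profile should have been created.
assert "docker-network-20220604154623-5712" not in names
```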

TestMainNoArgs (0.33s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220604161047-5712 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (417.5879ms)

-- stdout --
	* [NoKubernetes-20220604161047-5712] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14123
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)
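The run above exercises a mutually-exclusive-flag check: `--no-kubernetes` cannot be combined with `--kubernetes-version`, and minikube exits with status 14 (MK_USAGE). A sketch of that validation; the helper and its return convention are ours, only the flag names and the observed exit status come from the log:

```python
from typing import Optional

def validate_start_flags(no_kubernetes: bool,
                         kubernetes_version: Optional[str]) -> int:
    """Return the exit status a start invocation with these flags would get."""
    if no_kubernetes and kubernetes_version is not None:
        return 14  # MK_USAGE: the flags conflict, as seen in the log above
    return 0

assert validate_start_flags(True, "1.20") == 14  # the failing invocation above
assert validate_start_flags(True, None) == 0     # plain --no-kubernetes is fine
```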

TestStoppedBinaryUpgrade/Setup (0.7s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220604162348-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220604162348-5712 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9093959s)
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.91s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (21/220)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220604152644-5712 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2929981052\001
functional_test.go:1069: (dbg) Non-zero exit: docker build -t minikube-local-cache-test:functional-20220604152644-5712 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2929981052\001: exit status 1 (1.0600709s)

** stderr ** 
	#1 [internal] load build definition from Dockerfile
	#1 sha256:a9be4915a75b34a6cf4f336d4f44bb7a64bd2b33520142e3010859257977b90d
	#1 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	
	#2 [internal] load .dockerignore
	#2 sha256:faf90cdce6de3ee6e18858bd90de7369e6c25a3ec713f3f9beaa35a72f895262
	#2 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	------
	 > [internal] load .dockerignore:
	------
	------
	 > [internal] load build definition from Dockerfile:
	------
	failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system

** /stderr **
functional_test.go:1071: failed to build docker image, skipping local test: exit status 1
--- SKIP: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220604152644-5712 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220604152644-5712 --alsologtostderr -v=1] ...
helpers_test.go:500: unable to terminate pid 3964: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (7.27s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220604161926-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220604161926-5712
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220604161926-5712: (7.2660083s)
--- SKIP: TestStartStop/group/disable-driver-mounts (7.27s)

TestNetworkPlugins/group/flannel (7.52s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220604161352-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220604161352-5712

=== CONT  TestNetworkPlugins/group/flannel
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220604161352-5712: (7.5191342s)
--- SKIP: TestNetworkPlugins/group/flannel (7.52s)

TestNetworkPlugins/group/custom-flannel (7.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220604161400-5712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-20220604161400-5712

=== CONT  TestNetworkPlugins/group/custom-flannel
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-20220604161400-5712: (7.4026212s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (7.40s)
